Overview

This tutorial will guide you through the basics of editing in CryMannequin. We will be opening the editor and a setup file called a "preview setup". Then we will be creating & editing fragments, and finally we will save our changes.

Opening the Editor

Open up the CryMannequin Editor by clicking on the Mannequin icon on the toolbar. You can also open the CryMannequin Editor from the menu item View -> Open View Pane -> Mannequin Editor. In order to work in the CryMannequin Editor you need to load a preview setup file; the one used in this tutorial is called "sdk_tutorial1preview.xml". The preview file sets a character up:

- 1 - Left area: This can show either Fragments, Sequences or Transitions. Click on the tabs at the bottom of this area to show those different panels. We will explain later what they mean.
- 2 - Right area: This can show either the Fragment Editor, the Transition Editor, Previewer or the Error Report panel. Again, you can already click through these to get a feeling for how they look. And again, we will explain later what they are for.

Creating and Editing FragmentIDs

- We do this in the Fragments Panel (sometimes also called the Mannequin Fragment Browser), so make sure you have this panel open by clicking the Fragments tab on the bottom left.
- Press the "New ID..." button to start creating a new FragmentID. Next we need to enter a name. For this tutorial we pick the name "Idle". You will now notice that there is a new item in the Fragment Browser called Idle.

Creating and Editing Fragments

Creating a Fragment Using Drag and Drop

There are many ways to create fragments, but possibly the simplest is just dragging an animation onto a FragmentID. First let's find the animation we want to refer to in the Character Editor. Keep the CryMannequin Editor open. Open the Character Editor by clicking the icon for it on the main toolbar. Now we look for the animation:

- Make sure you have objects/characters/human/sdk_player/sdk_player.cdf open (if not, open it through File/Open...).
- If you want you can use the Filter to filter down the list of animations. In this example we just type 'idle'.
- Look for 'stand_tac_idle_rifle_3p_01' (in the "stand" folder; the folders here correspond to the animation name prefixes).

Next drag and drop this animation onto the Idle FragmentID we just created. The result will be:

- We created a new fragment that shows up as 'option 1' in the Fragment Browser. The elements in the fragment browser with the movie icons in front of them are the Fragments. It is placed in the '<default>' subfolder of the FragmentID. This basically means that this fragment is the default option for this FragmentID: whenever the game requests Idle the system will play this fragment. We will create more options, more variations, later.
- The fragment is automatically opened up in one of the panels on the right side: the Fragment Editor.
- We see that there is an animlayer (animation layer) inside the FullBody scope; indeed this is the scope we selected as default scope when we created the Idle FragmentID before.
- The animation clip we dragged in is placed on the timeline for this layer.

Playback Control

Here is a quick overview of how you can review the fragment's animation:

- Press here to play/pause the fragment, as well as change the playback speed by clicking on the little downwards pointing arrow.
- Scrub by dragging/moving the pink marker using the left mouse button. With the right mouse button you can define the playback range by dragging the two red triangles around on the timeline.
- Press here to enable looping playback (the loop will be over the playback range you selected in 2).
- Press here to toggle the time display from seconds to frames (30 fps) and back.

If the sequencing panel is selected (click somewhere in the light grey area) you have access to a couple of keyboard shortcuts too:

- spacebar: play/pause toggle
- left/right arrow key: next/previous tick
- home: move time back to the beginning of the sequence

Adding More Clips to a Fragment

Now we can make this fragment more complicated if we wanted to. For example, let's drag another animation clip into the fragment. Your timeline now looks like this, and you notice the animation clip got added.

Clip Zones

You can identify the following zones on the timeline now:

- Blend-in period of the first clip (this is currently ignored and might seem useless, but it will be used when you start to sequence this fragment after another fragment).
- The period where the first clip is playing fully.
- This is after the first clip has finished, but it will repeat its last key by default.
- Blend-in period of the second clip. This is where the transition happens between the repeating last key from the first clip and the second clip.
- The period where the second clip is playing fully.

Creating a Fragment Without Drag and Drop

Animation Clip Properties

Adding Multiple Layers

Now we can also show how to add multiple layers of animation within one fragment. Add another layer by using the same right-click menu item as before.

Procedural Clips

Moving Clips & Snapping

Copying a Fragment

To copy a fragment, drag and drop the fragment using the right mouse button. Alternatively you can also use the more familiar keyboard shortcuts CTRL+C and CTRL+V.

Deleting a Fragment

To delete a fragment, for example the one you just created by copying, use the Delete button while you have the fragment selected.

File Manager: Saving & Perforce Integration

At any time you can save your changes with the Save Changes menu item. If you have the Perforce plugin installed there is some (limited) Perforce integration:

- Read-only files are marked with a lock icon.
- You can check files out inside the file manager when needed. You are only allowed to save when all files are writable.
- Press refresh to update the file status (for example when you manually removed the read-only flag from a file).

Where to go Next
https://docs.cryengine.com/display/SDKDOC2/Mannequin+Editor+Tutorial+1+-+Preview+Setup%2C+Fragments+and+Saving
Energy UK comments on latest EU carbon price

Commenting on the EU carbon price, Energy UK's chief executive, Lawrence Slade, said: “A further rise in the EU carbon price to a 10-year high underlines how recent reforms, partly driven by the UK, are continuing to restore confidence in the world's largest emissions trading scheme (EU ETS).

“It is crucial that, post-Brexit, the UK remains in or closely linked to the EU ETS, and we reiterate the need for urgent clarity from the Government on the UK's future approach to carbon pricing as we prepare to leave the European Union.”
https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/412-2018/6743-energy-uk-comments-on-latest-eu-carbon-price.html
Transfer orders

Use a transfer order to move necessary parts between company stockrooms or to a location where an agent can receive them. The transfer order defines delivery dates, the stockrooms involved in the transfer, and the general status of the order. A transfer order contains one or more transfer order lines, which allow the transfer of multiple parts or assets on one transfer order. A transfer order line describes the part, the quantity required, and the status of the part in the transfer process. The system creates a transfer order automatically when you create a transfer order line. You can add additional transfer order lines to a transfer order as long as the transfer order is in the Draft stage. When any of the transfer order lines advances to the next stage, the transfer order stage also advances, and the order can no longer accept additional transfer order lines.
https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/planning-and-policy/concept/c_TransferOrders.html
When we fix a bug, the fix is usually included in one of the future versions of CS-Cart. But it may take a while for a new version to be released, while a bug may need to be fixed now. That's why we provide the fixes as files in the unified DIFF format (see the example below).

Questions & Feedback

Have any questions that weren't answered here? Need help with solving a problem in your online store? Want to report a bug in our software? Find out how to contact us.
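Because the fixes are plain unified DIFF files, they can be applied with the standard patch utility. A minimal sketch, assuming a hypothetical fix file named fix.diff saved in the CS-Cart root directory (file name and path strip level are assumptions):

$ patch -p1 --dry-run < fix.diff    # preview which files would change, without modifying anything
$ patch -p1 < fix.diff              # apply the fix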
https://docs.cs-cart.com/4.8.x/upgrade/index.html
Linino

; change microcontroller
board_build.mcu = atmega32u4

; change MCU frequency
board_build.f_cpu = 16000000L

Debugging

The PIO Unified Debugger currently does not support the Linino One board.
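A minimal sketch of where these overrides might live in a project's platformio.ini, assuming a board ID and environment name of "one" (an assumption based on this page's URL):

$ cat platformio.ini
[env:one]
platform = atmelavr
board = one
; change microcontroller
board_build.mcu = atmega32u4
; change MCU frequency
board_build.f_cpu = 16000000L

$ pio run -e one    # build the "one" environment with the overridden MCU settings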
https://docs.platformio.org/en/latest/boards/atmelavr/one.html
Running LinchPin

This guide will walk you through the basics of using LinchPin. LinchPin is a command-line utility, a Python API, and Ansible playbooks. As this guide is intentionally brief to get you started, a more complete version can be found in the documentation links found to the left in the index.

Topics
- Running LinchPin
- Running the linchpin command
- Workspaces
- Resources
- Provisioning (up)
- Teardown (destroy)
- Authentication

Running the linchpin command

The linchpin CLI is used to perform tasks related to managing resources. For detail about a specific command, see Commands (CLI).

Getting Help

Getting help from the command line is very simple. Running either linchpin or linchpin --help will yield the command line help page.

$ linchpin --help
Usage: linchpin [OPTIONS] COMMAND [ARGS]...

  linchpin: hybrid cloud orchestration

Options:
  -c, --config PATH                  Path to config file
  -p, --pinfile PINFILE              Use a name for the PinFile different from the configuration.
  -d, --template-data TEMPLATE_DATA  Template data passed to PinFile template
  -o, --output-pinfile OUTPUT_PINFILE
                                     Write out PinFile to provided location
  -w, --workspace PATH               Use the specified workspace. Also works if the familiar Jenkins WORKSPACE environment variable is set
  -v, --verbose                      Enable verbose output
  --version                          Prints the version and exits
  --creds-path PATH                  Use the specified credentials path. Also works if CREDS_PATH environment variable is set
  -h, --help                         Show this message and exit.

Commands:
  init     Initializes a linchpin project.
  up       Provisions nodes from the given target(s) in...
  destroy  Destroys nodes from the given target(s) in...
  fetch    Fetches a specified linchpin workspace or...
  journal  Display information stored in Run Database...

For subcommands, like linchpin up, passing the --help or -h option produces help related to the provided subcommand.

$ linchpin up -h
Usage: linchpin up [OPTIONS] TARGETS

  Provisions nodes from the given target(s) in the given PinFile.

  targets: Provision ONLY the listed target(s). If omitted, ALL targets in
  the appropriate PinFile will be provisioned.

  run-id: Use the data from the provided run_id value

Options:
  -r, --run-id run_id  Idempotently provision using `run-id` data
  -h, --help           Show this message and exit.

As can easily be seen, linchpin up has additional arguments and options.

Basic Usage

The most basic usage of linchpin might be to perform an up action. This simple command assumes a PinFile in the workspace (current directory by default), with one target, dummy.

$ linchpin up
Action 'up' on Target 'dummy' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy                   75  79b9            0

Upon completion, the systems defined in the dummy target will be provisioned. An equally basic usage of linchpin is the destroy action. This command is performed using the same PinFile and target.

$ linchpin destroy
Action 'destroy' on Target 'dummy' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy                   76  79b9            0

Upon completion, the systems which were provisioned are destroyed (or torn down).

Options and Arguments

The most common argument available in linchpin is the TARGET. Generally, the PinFile will have many targets available, but only one or two will be requested.
$ linchpin up dummy-new libvirt-new
Action 'up' on Target 'dummy' is complete
Action 'up' on Target 'libvirt' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy                   77  73b1            0
libvirt                 39  dc2c            0

In some cases, you may wish to use a different PinFile.

$ linchpin -p PinFile.json up
Action 'up' on Target 'dummy-new' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy-new               29  c70a            0

As you can see, this PinFile had a target called dummy-new, and it was the only target listed.

Other common options include:
- --verbose (-v) to get more output
- --config (-c) to specify an alternate configuration file
- --workspace (-w) to specify an alternate workspace

Combining Options

The linchpin command also allows combining of general options with subcommand options. A good example of these might be to use the verbose (-v) option. This is very helpful in both the up and destroy subcommands.

$ linchpin -v up dummy-new -r 72
using data from run_id: 72
rundb_id: 73
uhash: a48d
calling: preup
hook preup initiated

PLAY [schema check and Pre Provisioning Activities on topology_file] ********

TASK [Gathering Facts] ******************************************************
ok: [localhost]

TASK [common : use linchpin_config if provided] *****************************

What can be immediately observed is that the -v option provides more verbose output of a particular task. This can be useful for troubleshooting or giving more detail about a specific task. The -v option is placed before the subcommand. The -r option, since it applies directly to the up subcommand, is placed afterward. Investigating the linchpin --help and linchpin up --help output can help differentiate if there's confusion.

Common Usage

Verbose Output
$ linchpin -v up dummy-new

Specify an Alternate PinFile
$ linchpin -vp Pinfile.alt up

Specify an Alternate Workspace
$ export WORKSPACE=/tmp/my_workspace
$ linchpin up libvirt
or
$ linchpin -vw /path/to/workspace destroy openshift

Provide Credentials
$ export CREDS_PATH=/tmp/my_workspace
$ linchpin -v up libvirt
or
$ linchpin -v --creds-path /credentials/path up openstack

Note: The value provided to the --creds-path option is a directory, NOT a file. This is generally due to the topology containing the filename where the credentials are stored.

Workspaces

Initialization (init)

Resources

With LinchPin, resources are king. Defining, managing, and generating outputs are all done using a declarative syntax. Resources are managed via the PinFile. The PinFile can hold two additional files, the topology and layout. LinchPin also supports hooks.

Topology

The topology is declarative, written in YAML or JSON (v1.5+), and defines how the provisioned systems should look after executing the linchpin up command. A simple dummy topology is shown here.

---
topology_name: "dummy_cluster"  # topology name
resource_groups:
  - resource_group_name: "dummy"
    resource_group_type: "dummy"
    resource_definitions:
      - name: "web"
        role: "dummy_node"
        count: 1

This topology describes a single dummy system that will be provisioned when linchpin up is executed. Once provisioned, the resource outputs are stored for reference and later lookup. Additional topology examples can be found in the source code.

Inventory Layout

An inventory_layout (or layout) is written in YAML or JSON (v1.5+), and defines how the provisioned resources should look in an Ansible static inventory file.
The inventory is generated from the resources provisioned by the topology and the layout data. A layout is shown here.

---
inventory_layout:
  vars:
    hostname: __IP__
  hosts:
    example-node:
      count: 1
      host_groups:
        - example

The above YAML allows for interpolation of the IP address, or hostname, as a component of a generated inventory. A host group called example will be added to the Ansible static inventory. The all group always exists, and includes all provisioned hosts.

$ cat inventories/dummy_cluster-0446.inventory
[example]
web-0446-0.example.net hostname=web-0446-0.example.net

[all]
web-0446-0.example.net hostname=web-0446-0.example.net

Note: A keen observer might notice the filename and hostname are appended with -0446. This value is called the uhash, or unique-ish hash. Most providers allow for unique identifiers to be assigned automatically to each hostname as well as the inventory name. This provides a flexible way to repeat the process, but manage multiple resource sets at the same time. Advanced layout examples can be found by reading ra_inventory_layouts.

Note: Additional layout examples can be found in the source code.

PinFile

A PinFile takes a topology and an optional layout, among other options, as a combined set of configurations as a resource for provisioning. An example PinFile is shown.

dummy_cluster:
  topology: dummy-topology.yml
  layout: dummy-layout.yml

The PinFile collects the given topology and layout into one place. Many targets can be referenced in a single PinFile. More detail about the PinFile can be found in the PinFiles document. Additional PinFile examples can be found in the source code.

Provisioning (up)

Once a PinFile, topology, and optional layout are in place, provisioning can happen. Performing the command linchpin up should provision the resources and inventory files based upon the topology_name value. In this case, that is dummy_cluster.

$ linchpin up
target: dummy_cluster, action: up
Action 'up' on Target 'dummy_cluster' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy_cluster           70  0446            0

As you can see, the generated inventory file has the right data. This can be used in many ways, which will be covered elsewhere in the documentation.

$ cat inventories/dummy_cluster-0446.inventory
[example]
web-0446-0.example.net hostname=web-0446-0.example.net

[all]
web-0446-0.example.net hostname=web-0446-0.example.net

To verify resources with the dummy cluster, check /tmp/dummy.hosts.

$ cat /tmp/dummy.hosts
web-0446-0.example.net
test-0446-0.example.net

Teardown (destroy)

As expected, LinchPin can also perform teardown of resources. A teardown action generally expects that resources have been provisioned. However, because Ansible is idempotent, linchpin destroy will only check to make sure the resources are up. Only if the resources are already up will the teardown happen. The command linchpin destroy will look up the resources and/or topology files (depending on the provider) to determine the proper teardown procedure. The dummy Ansible role does not use the resources, only the topology, during teardown.

$ linchpin destroy
target: dummy_cluster, action: destroy
Action 'destroy' on Target 'dummy_cluster' is complete

Target              Run ID  uHash   Exit Code
-------------------------------------------------
dummy_cluster           71  0446            0

Verify the /tmp/dummy.hosts file to ensure the records have been removed.
$ cat /tmp/dummy.hosts
-- EMPTY FILE --

Note: The teardown functionality is slightly more limited around ephemeral resources, like networking, storage, etc. It is possible that a network resource could be used with multiple cloud instances. In this case, performing a linchpin destroy does not tear down certain resources. This is dependent on each provider's implementation.

Authentication

Some providers require authentication in order to manage resources. LinchPin provides tools for these providers to authenticate. The tools are called credentials.

Credentials

Credentials come in many forms. LinchPin wants to let the user control how the credentials are formatted. In this way, LinchPin supports the standard formatting and options for a provider. The only constraints that exist are how to tell LinchPin which credentials to use, and where the credentials data resides. In every case, LinchPin tries to use the data similarly to the way the provider might.

Credentials File

An example credentials file may look like this for aws.

$ cat aws.key
[default]
aws_access_key_id=ARYA4IS3THE3NO7FACEB
aws_secret_access_key=0Hy3x899u93G3xXRkeZK444MITtfl668Bobbygls

[herlo_aws1_herlo]
aws_access_key_id=JON6SNOW8HAS7A3WOLF8
aws_secret_access_key=Te4cUl24FtBELL4blowSx9odd0eFp2Aq30+7tHx9

To use these credentials, the user must tell LinchPin two things. The first is which credentials to use. The second is where to find the credentials data.

Using Credentials

In the topology, a user can specify credentials. The credentials are described by specifying the file, then the profile. As shown above, the filename is 'aws.key'. The user could pick either profile in that file.

---
topology_name: ec2-new
resource_groups:
  - resource_group_name: "aws"
    resource_group_type: "aws"
    resource_definitions:
      - name: demo-day
        flavor: m1.small
        role: aws_ec2
        region: us-east-1
        image: ami-984189e2
        count: 1
    credentials:
      filename: aws.key
      profile: default

The important part in the above topology is the credentials section. Adding credentials like this will look up, and use, the credentials provided.

Credentials Location

By default, credential files are stored in the default_credentials_path, which is ~/.config/linchpin.

Hint: The default_credentials_path value uses the interpolated default_config_path value, and can be overridden in the linchpin.conf.

The credentials path (or creds_path) can be overridden in two ways. It can be passed in when running the linchpin command.

$ linchpin -vvv --creds-path /dir/to/creds up aws-ec2-new

Note: The aws.key file could be placed in the default_credentials_path. In that case, passing --creds-path would be redundant.

Or it can be set as an environment variable.

$ export CREDS_PATH=/dir/to/creds
$ linchpin -v up aws-ec2-new

See also:
- Commands (CLI): LinchPin Command-Line Interface
- Common Workflows: Common LinchPin Workflows
- Managing Resources
- Providers: Providers in Detail
https://linch-pin.readthedocs.io/en/develop/running_linchpin.html
Pic: Decal Component

At its core, the entities themselves are the basis for all logic within CRYENGINE and allow you to create the dynamic gameplay needed for interaction. By default we also supply tools that allow you to access that functionality and contain it to specific areas. Included within this topic is the notion of components, which are the newest tech for modular building of entity complexity.

Standard Components | Legacy Entities | Archetypes* | Misc Objects* | Area Objects*

Pic: Create Object Window

The entities for the most part exist within the Create Object tab, and can also be created within the Asset Browser. For this reason we have linked these two tools for easier reference on the areas of the Editor interface that contain this logic.
https://docs.cryengine.com/display/CEMANUAL/Entities+and+Tools
Energy UK responds to Aurora Energy Research's report Responding to Aurora Energy Research's report, Energy UK's chief executive Lawrence Slade said: “With the performance of electric vehicles improving – and their costs falling – it will be essential that the supporting infrastructure is able to keep pace with fast growing consumer demand. EVs and other forms of low carbon transport have an enormous and widespread potential - to reduce emissions, slash harmful levels of air pollution in our towns and cities and bring economic benefits from the UK becoming a world leader in the technology involved. Consumers could also benefit from the ability to store and supply electricity, which could have a transformative effect on our energy system and energy bills. “It is vital to remove barriers to uptake of charge points at commercial and industrial sites, including reducing the cost of electricity connections as is being explored in Ofgem’s charging and access review. The energy industry is fully committed to working with other sectors to support faster decarbonisation of the transport sector, which is why Energy UK recently established an EV Charging Forum, bringing together charge point operators.”
https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/412-2018/6833-energy-uk-responds-to-aurora-energy-research-s-report.html
Funnel elements are what make magic happen in Groundhogg. We have thoughtfully designed a methodology for implementing custom actions & benchmarks, which you can find below. For example implementations of some elements, see the standard elements which come bundled with Groundhogg. Elements are located in /includes/elements/
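To browse those bundled reference implementations, you can simply list that directory from the plugin root. A small sketch, assuming a standard WordPress plugin path (the path prefix is an assumption):

$ ls wp-content/plugins/groundhogg/includes/elements/    # hypothetical install location; the elements live in /includes/elements/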
https://docs.groundhogg.io/docs/developer-docs/custom-funnel-elements/
Update-Package (Package Manager Console in Visual Studio)

Available only within the NuGet Package Manager Console in Visual Studio on Windows. Updates a package and its dependencies, or all packages in a project, to a newer version.

Syntax

Update-Package [-Id] <string> [-IgnoreDependencies] [-ProjectName <string>] [-Version <string>] [-Safe] [-Source <string>] [-IncludePrerelease] [-Reinstall] [-FileConflictAction] [-DependencyVersion] [-ToHighestPatch] [-ToHighestMinor] [-WhatIf] [<CommonParameters>]

Parameters

None of these parameters accept pipeline input or wildcard characters.

Common Parameters

Update-Package supports the following common PowerShell parameters: Debug, ErrorAction, ErrorVariable, OutBuffer, OutVariable, PipelineVariable, Verbose, WarningAction, and WarningVariable.

Examples

# Updates all packages in every project of the solution
Update-Package

# Updates every package in the MvcApplication1 project
Update-Package -ProjectName MvcApplication1

# Updates the Elmah package in every project to the latest version
Update-Package Elmah

# Updates the Elmah package to version 1.1.0 in every project, showing optional -Id usage
Update-Package -Id Elmah -Version 1.1.0

# Updates the Elmah package within the MvcApplication1 project
Update-Package Elmah -ProjectName MvcApplication1 -Safe

# Reinstall the same version of the original package, but with the latest version of dependencies
# (subject to version constraints). If this command rolls a dependency back to an earlier version,
# use Update-Package <dependency_name> to reinstall that one dependency without affecting the
# dependent package.
Update-Package Elmah -reinstall

# Reinstall the Elmah package in just MyProject
Update-Package Elmah -ProjectName MyProject -reinstall

# Reinstall the same version of the original package without touching dependencies.
Update-Package Elmah -reinstall -ignoreDependencies
https://docs.microsoft.com/en-us/nuget/tools/ps-ref-update-package
Access from Webmail

To access your mailbox through webmail, do any of the following:
- Click the icon corresponding to the email address you need.

Note: If you cannot open the webmail page, make sure that a webmail solution is enabled. Open the Mail section, then the Mail Settings tab, click the name of the domain for which webmail is inaccessible, and select a webmail client in the Webmail menu.
https://docs.plesk.com/en-US/12.5/administrator-guide/website-management/quick-start-with-plesk/set-up-mail-accounts/2-access-your-mailbox/access-from-webmail.69294/
Create a dashboard
- Use one of the following options.
- Provide a Title, ID, and Description for the dashboard.
- Specify permissions.
- Save the dashboard.
- Use one of the following options.
- Add panels, convert the dashboard to a form, or edit dashboard content.

For more information, see the following.
https://docs.splunk.com/Documentation/Splunk/6.5.0/Viz/CreateDashboards
Architecture

Anatomy of Visual Style Builder

Visual Style Builder is an end-user application that allows fast and intuitive styling of all controls in the Windows Forms suite. The application is divided into the following major parts:

Control Metadata Tree

Element States
This part lists all the VisualStates, as defined by the associated StateManager, for the currently selected metadata in the Metadata Tree.

Elements Grid
This part contains all the ElementMetadata instances associated with an ItemMetadata. These are definitions for all the elements that can be styled. As seen on the screen, this list contains a definition for the RadButtonElement itself as well as for its primitive children that do not have their own StateManager. An embedded PropertyGrid allows editing of properties directly in the grid itself.

Repository
This part lists all the repository items available for the currently edited theme. Items are filtered by the type of the currently selected element in the Elements Grid; for example, if a FillPrimitive is selected, only Fill repository items are listed.

Preview/Design View
This part displays an instance of the currently selected control in the metadata tree. Two different views are supplied, Preview and Design, where the preview one simply hosts the control while the design one adds extended functionality over the hosted control.

Visual Style Builder Selection Path
https://docs.telerik.com/devtools/winforms/tools/visual-style-builder/architecture
Obtain application and server credentials

Welcome to your new Bitnami application running on VMware vRealize Automation! You can connect to the server using an SSH client and execute commands on the server using the command line (see the example below). The SSH connection is only available if it was previously enabled in the base template. In that case, the SSH password is defined in the base template of the machine it was cloned from. Contact your datacenter administrator for more information.

What is the administrator username set for me to log in to the application for the first time?

Username: user

How do I access the application administration panel?

Access the administration panel by browsing to.
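A minimal sketch of that SSH connection, using a placeholder server address (the IP is hypothetical; 'user' is the administrator username named above):

$ ssh user@203.0.113.10    # hypothetical server IP; the SSH password is defined in the base template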
https://docs.bitnami.com/vmware/apps/wordpress/get-started/first-steps/
Installing the GRAT Component

Purpose: To run the installation package for the GRAT, after the applications are configured in Configuration Manager.

Prerequisites
- Creating the Rules Repository Database
- Creating the GRAT Application Objects in Configuration Manager

Start
- From the host on which the GRAT is to be installed, locate and double-click Setup.exe in the rulesauthoring GRAT application that you created in Creating the GRAT Application Objects in Configuration Manager. Click Next.
- Specify the destination directory for the installation, or accept the default location, and click Next.
- Enter the host and port of the optional backup Configuration Server and click Next.
- Enter the number of times that the GRAT Server application should attempt to reconnect to Configuration Server (Attempts) and the amount of time (Delay) between attempts. Click Next.
- On the screen that is shown in Creating the GRAT Application Objects in Configuration Manager, enter the name of the rules authoring client application and click Next.
- Select the database engine that you used to create the Rules Repository database in Creating the Rules Repository Database, and click Next.
- Enter the following connection details and then click Next (an illustrative set of values follows this procedure):
  Important: The values provided for the following properties (Connector Class and Database URL) are examples only. Consult the database vendor's JDBC documentation for detailed information specific to your database type. See this table for example values for the supported database types.
  - Connector class: The connector class will differ depending on the database type. See this table for examples.
  - Database URL: The database URL will depend on the database type. See this table for examples.
  Important: Where GRAT is going to run under JBoss, you must put the JDBC driver library .jar either in a location that is part of the GRAT application's classpath or directly into the .war, rather than creating and configuring a datasource.
  - User Name: The user name that is used to connect to the database. The user must have write permissions on the database created in Creating the Rules Repository Database.
  - Password: The password that is used to connect to the database.

End

Next Steps
- Before using the GRAT, you will need to set up users and roles. See Role Task Permissions and Configuring a User for more information.

Related: Creating GRAT Application Objects in Configuration Manager; Example Values for Database Connection Parameters
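For illustration only, the connection details for a hypothetical PostgreSQL repository might look like the following. These values are assumptions following common JDBC conventions, not values from this page; always consult your database vendor's JDBC documentation and the table of example values.

# Hypothetical example values (PostgreSQL assumed):
#   Connector class: org.postgresql.Driver
#   Database URL:    jdbc:postgresql://dbhost:5432/grs_repository
#   User Name:       grat_user   (needs write permissions on the repository database)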
https://docs.genesys.com/Documentation/GRS/8.1.3/Deployment/InstallingtheGRATComponent2
Enumerable.LongCount<TSource> Method (IEnumerable<TSource>)

Namespace: System.Linq
Assembly: System.Core (in System.Core.dll)

Syntax

'Declaration
<ExtensionAttribute> _
Public Shared Function LongCount(Of TSource) ( _
    source As IEnumerable(Of TSource) _
) As Long

public static long LongCount<TSource>(
    this IEnumerable<TSource> source
)

Type Parameters
- TSource: The type of the elements of source.

Parameters
- source
  Type: System.Collections.Generic.IEnumerable<TSource>
  An IEnumerable<T> that contains the elements to be counted.

Return Value
Type: System.Int64
The number of elements in the source sequence.

Usage Note
In Visual Basic and C#, you can call this method as an instance method on any object of type IEnumerable<TSource>. When you use instance method syntax to call this method, omit the first parameter.

Exceptions

Remarks
Use this method rather than Count when you expect the result to be greater than Int32.MaxValue. In Visual Basic query expression syntax, an Aggregate Into LongCount() clause translates to an invocation of LongCount.

Examples
The following code example demonstrates how to use LongCount<TSource>(IEnumerable<TSource>) to count the elements in an array.

' Create an array of strings.
Dim fruits() As String = _
    {"apple", "banana", "mango", "orange", "passionfruit", "grape"}

' Get the number of items in the array.
Dim count As Long = fruits.LongCount()

' Display the result.
outputBlock.Text &= "There are " & count & " fruits in the collection." & vbCrLf

' This code produces the following output:
'
' There are 6 fruits in the collection.

string[] fruits = { "apple", "banana", "mango", "orange", "passionfruit", "grape" };

long count = fruits.LongCount();

outputBlock.Text += String.Format("There are {0} fruits in the collection.", count) + "\n";

/* This code produces the following output:

   There are 6 fruits in the collection.
*/
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/bb353539%28v%3Dvs.95%29
This section creates the necessary virtual networks to support launching one or more instances. Networking option 1 includes one public virtual network and one instance that uses it. Networking option 2 includes one public virtual network, one private virtual network, and one instance that uses each network. The instructions in this section use command-line interface (CLI) tools on the controller node. For more information on the CLI tools, see the OpenStack User Guide. To use the dashboard, see the OpenStack User Guide.

Create virtual networks for the networking option that you chose in Add the Networking service. If you chose option 1, create only the public virtual network. If you chose option 2, create the public and private virtual networks. After creating the appropriate networks for your environment, you can continue preparing the environment to launch an instance.

Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.

Source the demo tenant credentials:

$ source demo-openrc.sh

Generate and add a key pair:

$ ssh-keygen -q -N ""
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey

Note: Alternatively, you can skip the ssh-keygen command and use an existing public key.

Verify addition of the key pair:

$ nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 |
+-------+-------------------------------------------------+

By default, the default security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).

Add rules to the default security group. Permit ICMP (ping):

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Permit secure shell (SSH) access:

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

If you chose networking option 1, you can only launch an instance on the public network. If you chose networking option 2, you can launch an instance on the public network and the private network. If your environment includes the Block Storage service, you can create a volume and attach it to an instance. If your environment includes the Orchestration service, you can create a stack that launches an instance.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/liberty/install-guide-rdo/launch-instance.html
The PORTING checkers identify code that might rely on specific implementation details in different compilers. The PORTING.CAST.PTR.FLTPNT checker detects a cast between pointers to types that are not both floating point or both non-floating point. Casting a floating point expression to a non-floating point data type may be a safe operation on certain platforms, but it can't be guaranteed to be successful on all compiler implementations. This checker warns you of expressions that explicitly or implicitly cast a floating point value to a non-floating point value, in case you need to take action.
https://docs.roguewave.com/en/klocwork/current/porting.cast.ptr.fltpnt
Command Line Interface (CLI) Documentation

The App Center command line interface is a unified tool for running App Center services from the command line. Our aim is to offer a concise and powerful tool for our developers to use App Center services and easily script a sequence of commands that they'd like to execute. You can currently log in and view/configure all the apps that you have access to in App Center. Although our current feature set is minimal, all the existing App Center services will be added going forward. Note that the App Center CLI is currently in public preview. For more information on CLI installation and currently supported commands, please refer to the App Center CLI GitHub repo.

Getting Started

Pre-requisites: App Center CLI requires Node.js version 8 or later.

Installation: Open a terminal/command prompt, and run npm install -g appcenter-cli.

Logging in
- Open a terminal/command window.
- Run appcenter login. This will open a browser and generate a new API token.
- Copy the API token from the browser, and paste this into the command window.
- The command window will display Logged in as {user-name}.
- Congratulations! You are successfully logged in and can run CLI commands.

There are two ways to use App Center CLI commands without running appcenter login first:

Using the --token parameter:
- Create a Full Access API token (see steps 1-5).
- Open a terminal/command window.
- Add the --token switch to the CLI command you are running. For example, run appcenter apps list --token {API-token} to get a list of your configured applications.

Using the APPCENTER_ACCESS_TOKEN environment variable: You can set the APPCENTER_ACCESS_TOKEN environment variable with your API token. This will work without having to append the --token switch to each CLI command.

Running your first CLI command
- Open a terminal/command window.
- Run appcenter to see a list of CLI commands.
- Run appcenter profile list to get information about the logged-in user.

For more details on the list of CLI commands, please refer to the App Center CLI GitHub repo.

A note about using the --app parameter: Due to restrictions in how app name parsing is done, application names must not begin with hyphens or other ambiguous characters that may confuse GNU style parsers. You can read more about this in the associated CLI issue.
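Putting the steps above together, a typical first session might look like this (the token value is a placeholder):

$ npm install -g appcenter-cli
$ appcenter login                            # opens a browser and generates a new API token
$ appcenter profile list                     # information about the logged-in user
$ appcenter apps list --token {API-token}    # or: export APPCENTER_ACCESS_TOKEN={API-token}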
https://docs.microsoft.com/en-us/appcenter/cli/
JD.UN.MET occurs if no call to a method is found in the analyzed code. (This checker is triggered only by non-private methods that were not overloaded and did not overload something.) Unused methods can be used as back doors. They also increase the code footprint in memory. Additionally, they increase the size of the source code, which decreases maintainability. Remove unused methods. Be careful when removing methods: make sure the method was not designed for extensibility, and is not a library method that may be used by code that is not part of the code you are analyzing with Klocwork.

 9  static class MyClass {
10      void foo(){
11          System.err.println("Hello, World!");
12      }
13  }

JD.UN.MET is reported for the method declaration on line 10: Method 'foo()' is never called in the analyzed context.
https://docs.roguewave.com/en/klocwork/current/jd.un.met
Note This feature was a closed pilot experiment. This feature is not supported for new users. To include the content of an existing course in another system, you use the edX LMS to find the location identifiers for the content you want to include. You then format the identifiers into an LTI URL. You might find using a tool like a spreadsheet helpful as a way to organize the course ID and each of the usage IDs that correspond to the course content you want to include in an external LMS. The identifier for your course can be in one of these formats. {key type}:{org}+{course}+{run}, for example, course-v1:edX+DemoX+2014 {org}/{course}/{run}, for example, edX/DemoX/2014 Courses created since Fall 2014 typically have an ID that uses the first format, while older courses have IDs that use the second format. To find the course ID for your course, follow these steps. For example, you open the “Blended Learning with edX” course to the Course page for the course. The URL for the Course page is. From the URL, you determine that the course ID is course-v1:edX+BlendedX+1T2015. Another example is the edX DemoX course. The URL is, and its course ID is course-v1:edX+DemoX.1+2T2017. The same course ID applies to every item of content in the course. The identifier for a specific component, unit, or subsection in your course can be in one of these formats. {key type}:{org}+{course}+{run}+type@{type}+block@{display name}, for example, block-v1:edX+DemoX+2014+type@sequential+block@basic_questions i4x:;_;_{org};_{course};_{type};_{display name}, for example, i4x:;_;_edX;_DemoX;_sequential;_basic_questions Courses created since Fall 2014 typically have usage IDs in the first format, while older courses have usage IDs in the second format. The following terms are used in the usage identifiers to indicate subsections, units, and components. The example usage IDs shown above include the word “sequential”, so they indicate subsections in a course. Several methods are available to help you find the usage IDs for items in your course. To find the usage ID for a unit or a component in the LMS, you can use either of these methods. To find the usage ID for a subsection, you view the page source. Note You must have the Staff or Admin role in a course to follow these procedures for finding usage IDs. To find the usage ID for a unit or component in the LMS, follow these steps. In the edX LMS, open your course. Select Course, and then go to the page that contains the unit or component. Select Staff Debug Info. To find the usage ID for a component, find the location. For example, location = block-v1:edX+BlendedX+1T2015+type@html+block@2114b1b8fd7947d28fba53414459ff01 To find the usage ID for a unit, scroll down to find the parent. For example, parent block-v1:edX+BlendedX+1T2015+type@vertical+block@ae7d9c34c2f34f7aa793ed7b55543ae5 The usage ID value begins with block-v1 for newer courses or i4x:// for older courses. If you are using a spreadsheet to organize your location identifiers, you can select the usage ID value, and then copy and paste it into the spreadsheet. To close the Staff Debug viewer, click on the browser page outside of the viewer. For more information, see Staff Debug Info. To find the usage ID for a subsection, unit, or component, you view the HTML page source for that page of the edX course. To find the usage ID for a subsection, unit, or component, follow these steps. In the edX LMS, open your course. Select Course, and then go to the page with the content that you want to include in an external LMS. 
Open the HTML source for the page. For example, in a Chrome browser you right-click on the page, and then select View Page Source. Use your browser's Find feature to locate the term data-usage-id. This attribute contains the usage ID. Review the value for the usage ID to determine the part of the course it identifies: the sequential (subsection), a unit (vertical), or a specific component (problem, html, or video).

Important: You might need to search beyond the first match to retrieve the usage ID for the content you want to identify. Be sure to check the data-usage-id for sequential, vertical, or problem, html, or video to be sure that you specify the content that you want.

For example, you want to link to a subsection in the edX Demo course. You open the course, go to the problem, and then right-click to view the page source. When you search for data-usage-id, the first match is block-v1:edX+DemoX+Demo_Course+type@sequential+block@basic_questions. You verify that this usage ID value is for the subsection by checking for the presence of sequential.

A more complex example gets the usage ID for the Drag and Drop problem in the edX DemoX course. The Drag and Drop problem is the second problem in the first homework assignment in Week 1 of the course. After you view the page source and search for data-usage-id, the first match is for the subsection (sequential). You search again, and see a usage ID that uses a slightly different format than the first usage ID, but contains the word "vertical", so you know that it is for the unit. The third time that you search, you get the usage ID for the first of the problems (problem) in the assignment. You search again, and find the usage ID for the second problem in the assignment, block-v1:edX+DemoX+Demo_Course+type@problem+block@d2e35c1d294b4ba0b3b1048615605d2a.

If you are using a spreadsheet to organize your location identifiers, you can select the usage ID value within the quotation marks, and then copy and paste it into the spreadsheet.

To identify the edX content that you want to include in an external LMS, you provide its URL. This URL has the following format.

https://{host}/lti_provider/courses/{course_id}/{usage_id}

To construct the LTI URL, you add your course ID and the specific content ID. Examples of the possible formats for an LTI URL follow.

LTI URLs for a subsection include "sequential", as follows.

https://{host}/lti_provider/courses/{course_id}/i4x:;_;_edX;_DemoX;_sequential;_graded_simulations

LTI URLs for a unit include "vertical", as follows.

https://{host}/lti_provider/courses/{course_id}/i4x:;_;_edX;_DemoX;_vertical;_d6cee45205a449369d7ef8f159b22bdf

LTI URLs for HTML components include "html+block" or "html", as follows.

https://{host}/lti_provider/courses/{course_id}/i4x:;_;_edX;_DemoX;_html;_2b94658d2eee4d85ae13f83bc24cfca9
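The same assembly can be scripted. A minimal shell sketch, using a hypothetical host name and the example IDs from this topic:

$ HOST=lms.example.com    # hypothetical host; use your edX instance's host
$ COURSE_ID="course-v1:edX+DemoX+Demo_Course"
$ USAGE_ID="block-v1:edX+DemoX+Demo_Course+type@sequential+block@basic_questions"
$ echo "https://${HOST}/lti_provider/courses/${COURSE_ID}/${USAGE_ID}"
https://lms.example.com/lti_provider/courses/course-v1:edX+DemoX+Demo_Course/block-v1:edX+DemoX+Demo_Course+type@sequential+block@basic_questions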
https://edx.readthedocs.io/projects/edx-partner-course-staff/en/latest/course_features/lti/lti_address_content.html
Versioning Strategy

The goals of this versioning strategy are:
- Release often, release early in order to get quick feedback from the SonarQube community
- Release stable versions of the SonarQube platform for companies whose main priority is to set up a very stable environment, even if the price for such stable environments is missing out on the latest, sexy SonarQube features
- Support the API deprecation strategy (see next section)

The rules are:
- Each ~two months a new version of SonarQube is released. This version should increment the minor digit of the previous version (ex: 4.2 -> 4.3)
- After three (or more) releases, a bug-fix version is released, and becomes the new LTS. The major digit of the subsequent version is incremented to start a new cycle (ex: 5.6 -> 6.0)

And here is the strategy in action:

4.4 -> 4.5 -> 5.0 -> 5.1 -> 5.2 -> ... -> 5.5 -> 6.0 -> ...   <- New release every ~2 months
        |                                  |
        4.5.1 -> 4.5.2 -> ...              5.5.1 -> 5.5.2 -> ...   <- New LTS

API Deprecation Strategy

The goal of this deprecation strategy is to make sure that deprecated APIs will be dropped without side-effects at a given planned date. The expected consequence of such a strategy is to ease the evolution of the SonarQube API by making such refactoring painless.

The rules are:
- An API deprecated in version X.Y is dropped in version (X+2).0. Example: an API deprecated in 4.1 is supported in 4.2, 4.3, 5.0, 5.1, 5.2, 5.3 and is dropped in version 6.0.
- According to the versioning strategy, that means that an API can remain deprecated for 6 to 12 months before being dropped.
- Any release of a SonarQube plugin must at least depend on the latest LTS version of the SonarQube API.
- For each SonarQube plugin there must be at least one release on each LTS version of SonarQube, which means at least one release every 6 months.
- No use of deprecated APIs is accepted when releasing a plugin. It raises a critical issue in SonarQube analysis. This issue can't be postponed.
- No deprecated API introduced 2 major versions ago is accepted when releasing SonarQube. It raises a critical issue in SonarQube analysis. This issue can't be postponed.
- An API is marked as deprecated with both:
  - the annotation @Deprecated
  - the javadoc tag @deprecated, whose message must start with "in x.y", for example:

/**
 * @deprecated in 4.2. Replaced by {@link #newMethod()}.
 */
@Deprecated
public void foo() {
}

The following example shows the amount of APIs marked as deprecated during the 4.x releases. [chart not reproduced here]

And here is the deprecation strategy in action, where A is the name of a method:

       A deprecated                       A removed
       |                                  |
4.1 -> 4.2 -> 4.3 -> 5.0 -> 5.1 -> 5.2 -> 6.0
               |                    |
               4.3.1                5.2.1
https://docs.sonarqube.org/display/DEV/Versioning+and+API+Deprecation
2019-04-18T16:29:19
CC-MAIN-2019-18
1555578517745.15
[]
docs.sonarqube.org
Data in most charts can be grouped by different periods. Most often these periods include day, week and month. Change the period with the dropdown in the top right corner of the chart. FYI: There are some exceptions; Growth Rates includes week, month, quarter and year, and Overall Median Time to First Response includes hour, day, week and month.
https://docs.statbot.io/getting-started-with-statbot/change-time-period
2019-04-18T16:19:04
CC-MAIN-2019-18
1555578517745.15
[array(['https://uploads.intercomcdn.com/i/o/13150774/125ac26873e97ac091015f65/timeperiod.png', None], dtype=object) ]
docs.statbot.io
Converting Exported Game Assets from XML to Binary

The utilities for converting exported game assets from XML to binary are located in the following directory: HarmonySDK/Source/Utils/

This directory contains the following:
- macosx: Precompiled binary for macOS.
- win32: Precompiled binary for Windows.
- Xml2Bin: Xml2Bin sources.
- Xml2Bin/proj.mac/Xml2Bin.xcodeproj: XCode project for macOS.
- Xml2Bin/proj.win32/Xml2Bin.sln: Visual Studio project for Windows.

The C++ code that handles the data structure can be reused and parsed in your own code if you want to integrate with other game engines.
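If you need to convert many exported files, the precompiled converter can be driven from a script. The following Python sketch is only an illustration: the binary path matches the listing above, but the command-line arguments (input file, then output file) are hypothetical and should be checked against the tool's own usage message.

import subprocess
from pathlib import Path

xml2bin = Path("HarmonySDK/Source/Utils/win32/Xml2Bin.exe")  # path from the listing above

for xml_file in Path("export").glob("*.xml"):
    # Hypothetical argument order: input file, then output file.
    subprocess.run([str(xml2bin), str(xml_file), str(xml_file.with_suffix(".bin"))], check=True)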
https://docs.toonboom.com/help/harmony-15/premium/gaming/convert-export-to-binary.html
2019-04-18T17:12:06
CC-MAIN-2019-18
1555578517745.15
[]
docs.toonboom.com
This structure can optionally be passed to wxExecute() to specify additional options to use for the child process. Include file: #include <wx/utils.h>

cwd: The initial working directory for the new process. If this field is empty, the current working directory of this process is used.

env: The environment variable map. If the map is empty, the environment variables of the current process are also used for the child one, otherwise only the variables defined in this map are used.
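The two fields map directly onto the working-directory and environment arguments found in most process-spawning APIs. For comparison only (this is not part of wxWidgets), here is the same pair in Python's standard subprocess module:

import os
import subprocess

# cwd plays the role of the working-directory field; env plays the role of the
# environment map. Passing env=None would inherit the parent's environment,
# just as an empty map does for the child process described above.
subprocess.run(
    ["printenv", "GREETING"],
    cwd="/tmp",
    env={**os.environ, "GREETING": "hello"},
    check=True,
)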
https://docs.wxwidgets.org/3.0/structwx_execute_env.html
2019-04-18T16:35:51
CC-MAIN-2019-18
1555578517745.15
[]
docs.wxwidgets.org
The Hitchhiker's Guide to Python!

This part of the guide focuses on setting up your Python environment.
- Properly Install Python

Writing Great Code
This part of the guide focuses on best practices for writing Python code.
- Structuring Your Project
- Code Style
- Reading Great Code
- Documentation
- Testing Your Code
- Common Gotchas
- Choosing a License

Scenario Guide
This part of the guide focuses on tool and module advice based on different scenarios.
- Network Applications
- Web Applications
- HTML Scraping
- Command Line Applications
- GUI Applications
- Databases
- Networking
- Systems Administration
- Continuous Integration
- Speed
- Scientific Applications
- Image Manipulation
- XML parsing

Shipping Great Code
This part of the guide focuses on deploying your Python code.

Development Environment

Additional Notes
This part of the guide, which is mostly prose, begins with some background information about Python, then focuses on next steps.
- Introduction
- The Community
- Learning Python
- Documentation
- News

Contribution notes and legal information are here (for those interested).
https://python-guide-chinese.readthedocs.io/zh_CN/latest/
2019-04-18T17:12:57
CC-MAIN-2019-18
1555578517745.15
[]
python-guide-chinese.readthedocs.io
https://docs.microsoft.com/ar-sa/graph/overview
2019-08-17T17:24:32
CC-MAIN-2019-35
1566027313436.2
[array(['images/microsoft-graph-dataconnect-connectors-800.png', 'Microsoft Graph, Microsoft Graph data connect, and Microsoft Graph connectors enable extending Microsoft 365 experiences and building intelligent apps.'], dtype=object) array(['images/microsoft-graph.png', 'An image showing the primary resources and relationships that are part of the graph'], dtype=object) ]
docs.microsoft.com
AttachmentAdd Event

Occurs when an attachment has been added to an item.

See also: Attachment Object | AttachmentRead Event | BeforeAttachmentSave Event | Using events with Automation
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa209975%28v%3Doffice.11%29
2019-08-17T17:06:15
CC-MAIN-2019-35
1566027313436.2
[]
docs.microsoft.com
DrilldownMemberBottom (MDX)

Drills down the members in a specified set that are present in a second specified set, limiting the result set to a specified number of members. Alternatively, this function can also drill down on a set of tuples.

Syntax

DrilldownMemberBottom(Set_Expression1, Set_Expression2, Count [ , [ Numeric_Expression ][ , RECURSIVE ] ] )

Remarks

The DrilldownMemberBottom function sorts the bottom set according to the values of the cells represented by the set of child members, as determined by the query context. After sorting, the DrilldownMemberBottom function is similar to the DrilldownMember function, but instead of including all children for each member in the first set that is also present in the second set, the DrilldownMemberBottom function returns the bottommost number of child members for each member.

Example

The following example drills down into the clothing category to return the three subcategories of clothing with the bottom quantity of orders shipped.

SELECT DrilldownMemberBottom
   ({[Product].[Product Categories].[All Products],
     [Product].[Product Categories].[Category].Bikes,
     [Product].[Product Categories].[Category].Clothing},
    {[Product].[Product Categories].[Category].Clothing},
    3, [Measures].[Reseller Order Quantity]
   ) ON 0
FROM [Adventure Works]
WHERE [Measures].[Internet Order Quantity]

See Also

Reference: MDX Function Reference (MDX). Help and Information: Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms145510%28v%3Dsql.90%29
2019-08-17T17:33:34
CC-MAIN-2019-35
1566027313436.2
[]
docs.microsoft.com
SMS Routing Configuration and Options

This document discusses how to configure the routing of SMS for a DID and the various routing options. This configuration is available in the Manage Inventory section of the Manage DIDs tab from your Reseller Dashmanager.

Steps to Configure SMS Routing

When opening the Manage Inventory section, you may click on the SMS icon found in the Actions column for any number in your inventory to configure the routing settings for SMS. If the DID is NOT SMS enabled, please read the article Enabling or Disabling the SMS or MMS for a DID before proceeding with the next step. You will find in the Assign To section a dropdown menu containing the various routing options. (Each option is further described below.) After completing the configuration, be sure to click on the Save icon to save it. After choosing an option in the previous step, a new section will appear, Send Copies To, where you may set additional destinations for incoming SMS. The configuration of this option is similar to the Assign To section, except that there is no option to choose a domain, because it defaults to the same domain selected in the Assign To section.

Note: Any of these settings can be deleted by using the Trashcan icon on the right of each configuration.

Note: The ReachUC User set in the Assign To section will be able to send and receive SMS through their account. Any recipients for ReachUC User added in the Send Copies To section will be able to receive SMS, but note that replies will be sent using their own ReachUC account and not the original recipient's account.

Routing Options

Directory Bot

This is the option to use when the DID serves the SMS Directory Bot (Sallie) or SMS Responder Bot service. To learn more about them and how to configure them, you may go to their respective articles by opening the links below. SMS Directory Bot (Sallie) | SMS Responder Bot

Rocket Chat

Choose this option to configure the integration between SkySwitch SMS and Rocket.Chat. Provide the Rocket.Chat incoming webhook URL in the input box provided. More steps are discussed in the article How to Use Rocket.Chat and SkySwitch SMS Integration.

Email

This option will route incoming messages for the DID as an email to the email address entered. For the domain, it is best to choose the domain for the user who owns the DID. Note that email recipients may reply to a message, which will be sent as SMS; but if there are pictures attached and/or part of the user's signature, the reply will be sent as MMS. If MMS is not enabled for the DID, the reply will not be received. Related article: Send an email to a RUC User.

ReachUC User

This will point the SMS to the user for a specified domain so that any incoming messages can be received in the ReachUC app where they are logged in with their PBX credentials. To learn more, you may go to ReachUC.com or to the SkySwitch ReachUC Docs. Note that the user has to have logged in at least once to the ReachUC mobile app, ReachUC standalone desktop app, or WebRTC in your browser in order for it to appear in the dropdown list.

URL

This can be any URL that can interpret the various parameters sent by SkySwitch. Services like IFTTT and Zapier can be configured to do this. You can watch a video on how to Copy SMS Messages to a Google Spreadsheet with SkySwitch Business SMS and Zapier below.
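For the URL option, the receiving endpoint only needs to accept an HTTP request and read the message parameters. The sketch below is a generic Python/Flask illustration; the parameter names (from_number, to_number, body) are hypothetical stand-ins, since the exact parameters SkySwitch sends are documented separately.

from flask import Flask, request

app = Flask(__name__)

@app.route("/sms", methods=["GET", "POST"])
def receive_sms():
    # Hypothetical parameter names; check the SkySwitch docs for the real ones.
    sender = request.values.get("from_number")
    recipient = request.values.get("to_number")
    body = request.values.get("body")
    print(f"SMS from {sender} to {recipient}: {body}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)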
https://docs.skyswitch.com/en/articles/706-sms-routing-configuration-and-options
2019-08-17T17:02:42
CC-MAIN-2019-35
1566027313436.2
[array(['https://cdn.elev.io/file/uploads/_eh56OxzLPUO0MmZuwVKW71SNAlSUtKi-8ltQut2Xzg/PnUsyQG55cHbXwBwKKhrrRWfCd3h0FZeSY_jjxKYbjI/1559839722807-sms.png', None], dtype=object) array(['https://cdn.elev.io/file/uploads/_eh56OxzLPUO0MmZuwVKW71SNAlSUtKi-8ltQut2Xzg/sjFJy2-u8wyUQPW9bW57OrdNwUv_3GS3LNKIcp9ZsEo/1559839723130-RTY.png', None], dtype=object) ]
docs.skyswitch.com
Persistence

The following kinds of objects are considered on the system:
- Tree objects: resources that implement guillotina.interfaces.IResource. Each object has a __name__ and a __parent__ property that indicate its id on the tree and the link to its parent. By themselves they don't have access to their children; they need to interact with the transaction object to get them.
- Annotations: objects that are associated with tree objects. These can be any type of data. In Guillotina, the main source of annotation objects is behaviors.

Saving objects

If you're manually modifying objects in services (or views) without using the serialization adapters, you need to register the object to be saved to the database. To do this, just use the register() method.

from guillotina import configure

@configure.service(
    method='PATCH',
    name='@dosomething')
async def matching_service(context, request):
    context.foobar = 'foobar'
    context.register()

Transactions

Guillotina automatically manages transactions for you in services; however, if you have long running services and need to flush data to the database, you can manually manage transactions as well.

from guillotina.transactions import get_tm

tm = get_tm()
await tm.commit()  # commit current transaction
await tm.begin()   # start a new one

There is also an async context manager:

from guillotina.transactions import transaction
from guillotina.utils import get_database

my_db = await get_database('my-db-id')
async with transaction(db=my_db) as txn:
    ...  # modify objects here
https://guillotina.readthedocs.io/en/latest/developer/persistence.html
2019-08-17T17:05:38
CC-MAIN-2019-35
1566027313436.2
[]
guillotina.readthedocs.io
https://docs.joomla.org/index.php?title=Special:UserLogin&type=signup&returnto=Development+FAQ
2015-03-26T23:50:41
CC-MAIN-2015-14
1427131293283.10
[]
docs.joomla.org
This check prevents accidental injection of variables through a register globals attack that could trick the PHP file into thinking it is inside the application when it really isn't. Setting the error reporting down would have a similar effect; however, there are configurations where changing PHP's INI settings isn't permitted. The JEXEC check, conventionally written as defined('_JEXEC') or die('Restricted access');, works regardless of whether the configuration can be changed and has no other side effects. (For example, if you're debugging, having every file reduce the error reporting would be annoying, because you'd have to either set a debug flag to stop it or reset the error reporting after each file is included. Not fun!) Note: this line should NOT be included in your main index.php file, since that is the program that starts the Joomla! session.
https://docs.joomla.org/index.php?title=Why_do_most_of_the_Joomla!_PHP_files_start_with_%22defined('_JEXEC')...%3F&oldid=74124
2015-03-27T00:24:53
CC-MAIN-2015-14
1427131293283.10
[]
docs.joomla.org
These are the relevant member variables automatically generated on a call to getUser().
https://docs.joomla.org/index.php?title=Accessing_the_current_user_object&diff=104603&oldid=103936
2015-03-27T00:44:49
CC-MAIN-2015-14
1427131293283.10
[]
docs.joomla.org
This page is tagged because it NEEDS REVIEW. You can help the Joomla! Documentation Wiki by contributing to it. More pages that need help similar to this one are here. NOTE: If you feel the need is satisfied, please remove this notice.

Click the Parameters button to open the News Feeds Global Configuration window. This window allows you to set default parameters for News Feeds, as shown below.

Filter by Partial Title: You can filter the list of items by entering part of the title in the filter box.
https://docs.joomla.org/index.php?title=Help15:Screen.newsfeeds.15&diff=5914&oldid=5791
2015-03-26T23:48:11
CC-MAIN-2015-14
1427131293283.10
[]
docs.joomla.org
User Guide

Turn on or turn off Mobile Hotspot mode

The first time you turn on Mobile Hotspot mode, your smartphone might prompt you to activate your Mobile Hotspot account and set a password for use with your Mobile Hotspot account.

Before you begin: To perform this task, you must turn on your BlackBerry smartphone's Mobile Network connection and Wi-Fi connection.

On the Home screen, click the connections area at the top of the screen or click the Manage Connections icon.
http://docs.blackberry.com/en/smartphone_users/deliverables/37644/Turn_on_3G_mobile_hotspot_1323937_11.jsp
2015-03-27T00:01:20
CC-MAIN-2015-14
1427131293283.10
[]
docs.blackberry.com
Definition

Example

Let's see how to use Jetty 4.x (an embedded container) with a WAR to deploy in it.

Example using the Ant API

Starting JOnAS 5.x with a WAR to deploy:

Example using the Maven2/Maven3 API

Here is the plugin configuration defining a JOnAS 5.x container with a WAR to deploy:

For more information...

For more information about how deployment in CARGO works, please read:
- How deployables work, which explains how to instantiate and personalize deployables.
- How deployers work, which explains how the different deployers work.
http://docs.codehaus.org/pages/diffpages.action?pageId=229737787&originalId=228165114
2015-03-26T23:47:46
CC-MAIN-2015-14
1427131293283.10
[]
docs.codehaus.org
scipy.interpolate.UnivariateSpline

class scipy.interpolate.UnivariateSpline(x, y, w=None, bbox=[None, None], k=3, s=None)[source]

One-dimensional smoothing spline fit to a given set of data points. Fits a spline y = s(x) of degree k to the provided x, y data; the parameter s specifies a smoothing condition.

>>> import matplotlib.pyplot as plt
>>> from numpy import linspace, exp
>>> from numpy.random import randn
>>> from scipy.interpolate import UnivariateSpline
>>> x = linspace(-3, 3, 100)
>>> y = exp(-x**2) + randn(100)/10
>>> s = UnivariateSpline(x, y, s=1)
>>> xs = linspace(-3, 3, 1000)
>>> ys = s(xs)
>>> plt.plot(x, y, '.-')
>>> plt.plot(xs, ys)
>>> plt.show()

xs, ys is now a smoothed, super-sampled version of the noisy gaussian x, y.

Methods
http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.interpolate.UnivariateSpline.html
2015-03-26T23:49:04
CC-MAIN-2015-14
1427131293283.10
[]
docs.scipy.org
- Music
- Videos
- Video camera
- Camera
- Voice notes
- Tips: Media
- Troubleshooting: Media
- Ring tones, sounds, and alerts
- Browser
- Calendar
- Contacts
- Clock
- Tasks and memos
- Typing
- Keyboard
- Language
- Screen display
- GPS technology
- Compass
- Maps
- Applications
- BlackBerry ID
- BlackBerry Device Software
- Manage Connections
- Mobile Hotspot mode
- Send a file
- Send contact cards using Bluetooth technology
- Rename or delete a paired Bluetooth enabled device
- Make your smartphone discoverable
- Bluetooth technology options
http://docs.blackberry.com/en/smartphone_users/deliverables/38289/1790672.jsp
2015-03-26T23:55:24
CC-MAIN-2015-14
1427131293283.10
[]
docs.blackberry.com
Chapter 3. Making Media

3.1. Making an installation DVD
3.2. Preparing a USB flash drive as an installation source
3.2.1. Making Fedora USB Media on a Windows Operating System
3.2.2. Making Fedora USB Media in UNIX, Linux, and Similar Operating Systems
3.3. Making Minimal Boot Media
3.3.1. UEFI-based systems

Use the methods described in this section to create the following types of installation and boot media:
- an installation DVD
- a USB flash drive to use as an installation source
- a minimal boot CD or DVD that can boot the installer
- a USB flash drive to boot the installer

The following table indicates the types of boot and installation media available for different architectures and notes the image file that you need to produce the media.

Table 3.1. Boot and installation media
- BIOS-based 32-bit x86: installation DVD and installation USB flash drive both use the x86 DVD ISO image file; boot CD/DVD and boot USB flash drive both use boot.iso.
- UEFI-based 32-bit x86: not available.
- BIOS-based AMD64 and Intel 64: installation DVD and installation USB flash drive use the x86_64 DVD ISO image file (to install a 64-bit operating system) or the x86 DVD ISO image file (to install a 32-bit operating system); boot CD/DVD and boot USB flash drive both use boot.iso.
- UEFI-based AMD64 and Intel 64: installation DVD uses the x86_64 DVD ISO image file; installation USB flash drive and boot CD/DVD are not available; boot USB flash drive uses efiboot.img (from the x86_64 DVD ISO image file).

3.1. Making an installation DVD

You can make an installation DVD using the disc burning software on your computer. The exact series of steps that produces a DVD from an ISO image file varies greatly from computer to computer, depending on the operating system and disc burning software installed. Use this procedure as a general guide. You might be able to omit certain steps on your computer, or might have to perform some of the steps in a different order from the order described here. If your computer does not already include disc burning software, you might need to obtain a separate piece of software for this task. Examples of popular disc burning software for Windows that you might already have on your computer include Nero Burning ROM and Roxio Creator. The Disk Utility software installed by default with Mac OS X on Apple computers has the capability to burn discs from images built into it already. Most widely-used DVD burning software for Linux, such as Brasero and K3b, also includes this capability.

Download an ISO image file of a Fedora 18 disc as described in Chapter 2, Obtaining Fedora.

Insert a blank, writeable disc into your computer's disc burner. On some computers, a window opens and displays various options when you insert the disc. If you see a window like this, look for an option to launch your chosen disc burning program. If you do not see an option like this, close the window and launch the program manually.

Launch your disc burning program. On some computers, you can do this by right-clicking (or control-clicking) on the image file and selecting a menu option with a label like Copy image to DVD, or Copy CD or DVD image. Other computers might provide you with a menu option to launch your chosen disc burning program, either directly or with an option like Open With. If none of these options are available on your computer, launch the program from an icon on your desktop, in a menu of applications such as the Start menu on Windows operating systems, or in the Mac Applications folder.

In your disc burning program, select the option to burn a DVD from an image file.
For example, in Nero Burning ROM, this option is called Burn Image and is located on the File menu. Note that you can skip this step when using certain DVD burning software; for example, Disk Utility on Mac OS X does not require it.

Browse to the ISO image file that you downloaded previously and select it for burning.

Click the button that starts the burning process.

On some computers, the option to burn a disc from an ISO file is integrated into a context menu in the file browser. For example, when you right-click an ISO file on a computer with a Linux or UNIX operating system that runs the GNOME desktop, the Nautilus file browser presents you with the option to Write to disk.
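Before burning, it is good practice to verify the downloaded image against its published checksum. As a generic illustration (the file name is a placeholder; compare the output against the published CHECKSUM value), a short Python snippet:

import hashlib

def sha256sum(path, chunk_size=1 << 20):
    # Compute the SHA-256 digest of a file without loading it all into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256sum("Fedora-18-x86_64-DVD.iso"))  # placeholder file name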
http://docs.fedoraproject.org/en-US/Fedora/18/html/Installation_Guide/sn-making-media.html
2015-03-26T23:45:34
CC-MAIN-2015-14
1427131293283.10
[]
docs.fedoraproject.org
Actors, Dataflow, and Safe - a non-blocking, mt-safe reference to mutable state that is inspired by "agents" in the Clojure language. Please refer to the User Guide for a more extensive coverage of these topics or head over to the Demos. Let the fun begin!
http://docs.codehaus.org/pages/viewpage.action?pageId=131727373
2015-03-26T23:53:04
CC-MAIN-2015-14
1427131293283.10
[]
docs.codehaus.org
Welcome to the Search Reference Manual

In this manual, you'll find a reference guide for the Splunk user who is looking for a catalog of the search commands with complete syntax, descriptions, and examples for usage. If you're looking for an introduction to searching in Splunk, refer to the Search Tutorial to get you started. For more information about Splunk search, refer to the Search Manual. See the "List of search commands" in the Search Commands and Functions chapter for a catalog of the search commands, with a short description of what they do and related search commands. Each search command links you to its reference page in the Search Command Reference.
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/WhatsInThisManual
2015-03-26T23:42:35
CC-MAIN-2015-14
1427131293283.10
[]
docs.splunk.com
Development Guide

BlackBerry Push Service

The BlackBerry® Push Service implements the Push Access Protocol (PAP). The PAP specifies how push applications can send messages to mobile devices. With the BlackBerry Push Data Service (BPDS), content developers can use a PAP interface to push content to BlackBerry devices. The BPDS infrastructure works as a Push Protocol Gateway (PPG). It provides a reliable and highly secure set of services that allows content providers to push data to their client applications on the BlackBerry device.

RIM also provides the BlackBerry® Push Service SDK, which helps simplify push application development. It provides a programming model to help developers create client- and server-side push applications in enterprise and consumer environments. The SDK exposes the push functionality at a higher level of abstraction so that developers can concentrate on what they want to do and not on how to accomplish simple tasks (such as creating push messages). It also allows the developer to choose the appropriate push environment (enterprise or consumer) without changing the push application.

Under normal conditions, each request to the PPG returns a response that includes a status code that indicates the result of the request.
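To make the PAP flow concrete, here is a rough Python sketch of a push submission. The multipart layout (an XML control entity followed by the message content) follows the PAP specification; the endpoint URL, push ID, address, and credentials are hypothetical placeholders that would come from your push service registration.

import requests  # third-party HTTP library, assumed available

boundary = "mPsbVQo0a68eIL3OAxnm"
ppg_url = "https://pushapi.example.com/mss/PD_pushRequest"  # hypothetical endpoint

control = """<?xml version="1.0"?>
<pap>
  <push-message push-id="example-push-0001">
    <address address-value="push_all"/>
  </push-message>
</pap>"""

body = (
    f"--{boundary}\r\nContent-Type: application/xml\r\n\r\n{control}\r\n"
    f"--{boundary}\r\nContent-Type: text/plain\r\n\r\nHello from the server\r\n"
    f"--{boundary}--\r\n"
)

response = requests.post(
    ppg_url,
    data=body.encode(),
    headers={"Content-Type": f"multipart/related; boundary={boundary}; type=application/xml"},
    auth=("application-id", "password"),  # hypothetical credentials
)
print(response.status_code)  # the PAP response body carries the detailed status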
http://docs.blackberry.com/en/developers/deliverables/25167/Overview_1259222_11.jsp
2015-03-27T00:05:34
CC-MAIN-2015-14
1427131293283.10
[]
docs.blackberry.com
This plugin imports Fortify SSC reports into SonarQube™:
- Import the Fortify Security Rating, a value between 1 and 5
- Import the number of issues marked as critical, high, medium and low priority in Fortify
- Link to the Fortify SSC web report
- Import vulnerability issues as SonarQube™ issues. Supported languages are ABAP, C#, C++, Cobol, Java, JavaScript, Python and VB.

Here are some screenshots of the plugin:

Installation
- Install the Fortify plugin through the Update Center or download it into the SONARQUBE_HOME/extensions/plugins directory
- Restart the SonarQube™ server

Usage
- Configure the connection to the Fortify SSC Server in Settings, then run a SonarQube™ analysis. The following logs should appear: ...
http://docs.codehaus.org/pages/diffpages.action?originalId=231081485&pageId=231082076
2015-03-26T23:51:36
CC-MAIN-2015-14
1427131293283.10
[array(['/download/attachments/229740455/fortify-widget.png?version=1&modificationDate=1344861016982&api=v2', None], dtype=object) array(['/download/attachments/229740455/fortify-issues.png?version=1&modificationDate=1344862687785&api=v2', None], dtype=object) ]
docs.codehaus.org
Quickstart

Let's get started with Buildozer!

Init and build for Android

Buildozer will try to guess the version of your application by searching for a line like __version__ = "1.0.3" in your main.py. Ensure you have one at the start of your application. It is not mandatory but strongly advised.

Create a buildozer.spec file with:

buildozer init

Edit the buildozer.spec according to the specifications. You should at least change the title, package.name and package.domain in the [app] section.

Start an Android debug build with:

buildozer -v android debug

Now it's time for a coffee / tea, or a dinner if you have a slow computer. The first build will be slow, as it will download the Android SDK, NDK, and other tools needed for the compilation. Don't worry, those files will be saved in a global directory and will be shared across the different projects you'll manage with Buildozer. At the end, you should have an APK file in the bin/ directory.

Run my application

Buildozer is able to deploy the application on your mobile, run it, and even stream the log back to the console. It will work only if you have already compiled your application at least once:

buildozer android deploy run logcat

For iOS, it would look the same:

buildozer ios deploy run

You can combine the compilation with the deployment:

buildozer -v android debug deploy run logcat

You can also set this as the default command to run when Buildozer is started without any arguments:

buildozer setdefault android debug deploy run logcat
# now just type buildozer, and it will do the default command
buildozer

To save the logcat output into a file named my_log.txt (the file will appear in your current directory):

buildozer -v android debug deploy run logcat > my_log.txt

Install on non-connected devices

If you have compiled a package and want to share it easily with other devices, you might be interested in the serve command. It will serve the bin/ directory over HTTP. Then you just have to open the URL shown in the console from your mobile:

buildozer serve
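As noted above, Buildozer tries to detect a __version__ line in main.py. A minimal Python illustration of that kind of detection (my own sketch, not Buildozer's actual implementation):

import re

def detect_version(source):
    # Find a line like: __version__ = "1.0.3" and return the version string.
    match = re.search(r'^__version__\s*=\s*["\']([^"\']+)["\']', source, re.MULTILINE)
    return match.group(1) if match else None

print(detect_version('__version__ = "1.0.3"\n'))  # -> 1.0.3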
https://buildozer.readthedocs.io/en/latest/quickstart.html
2021-09-16T21:04:24
CC-MAIN-2021-39
1631780053759.24
[]
buildozer.readthedocs.io
2019-11-08 WordPress WP-VCD Malware via Pirated Plugins or Themes

We recently received several support requests about WordPress sites that went down without any apparent reason. After investigating these issues, we found that there is an ongoing attack known as WP-VCD. This infection is spread via "nulled", or pirated, plugins and themes distributed by a network of related sites. The Bitnami Team can confirm that none of these plugins are included in Bitnami solutions by default. As long as you did not install a "nulled" plugin or theme, your WordPress deployment is secure against this vulnerability.

Once users install an infected theme or plugin downloaded from these distribution sites, their WordPress installations are hacked and taken over within seconds. The malware executes a deployer script that injects a backdoor into all installed theme files, and resets the timestamps to match the values before the injection process to evade detection. The truncated code snippet below was sourced from an infected "functions.php" file on a site compromised by WP-VCD.

<?php
if (isset($_REQUEST['action']) && isset($_REQUEST['password']) && ($_REQUEST['password'] == '2f3ad13e4908141130e292bf8aa67474'))
{
    $div_code_name = "wp_vcd";
    switch ($_REQUEST['action']) {
        case 'change_domain';
            if (isset($_REQUEST['newdomain']))
            ...

You can find more information about this attack on the Wordfence site and in other security announcements like this one. Wordfence also provides a site cleaning guide for owners of infected sites and a detailed procedure for how to secure WordPress websites to prevent future attacks. If you have further questions about Bitnami WordPress or this security issue, please post to our community forum and we will be happy to help you.
https://docs.bitnami.com/general/security/security-2019-11-08/
2021-09-16T20:54:17
CC-MAIN-2021-39
1631780053759.24
[]
docs.bitnami.com
Update Streams

Individual Update Streams

Fedora CoreOS (FCOS) has several individual update streams that are available to end users. They are:

stable: The stable stream is the most reliable stream offered, with changes only reaching that stream after spending a period of time in the testing stream.

testing: The testing stream represents what is coming in the next stable release. Content in this stream is updated regularly and offers our community an opportunity to catch breaking changes before they hit the stable stream.

next: The next stream represents the future. It will often be used to experiment with new features and also test out rebases of our platform on top of the next major version of Fedora. The content in the next stream will also eventually filter down into testing and on to stable.

When following a stream, a system is updated automatically when a new release is rolled out on that stream. While all streams of FCOS are automatically tested, it is strongly encouraged for users to devote a percentage of their FCOS deployment to running the testing and next streams. This ensures possible breaking changes can be caught early enough that stable deployments experience fewer regressions.

Switching to a Different Stream

In order to switch between the different streams of Fedora CoreOS (FCOS), a user can leverage the rpm-ostree rebase command.

# Stop the service that performs automatic updates
sudo systemctl stop zincati.service

# Perform the rebase to a different stream
# Available streams: "stable", "testing", and "next"
STREAM="testing"
sudo rpm-ostree rebase "fedora/x86_64/coreos/${STREAM}"

After inspecting the package difference the user can reboot. After boot the system will be loaded into the latest release on the new stream and will follow that stream for future updates.
https://docs.fedoraproject.org/pt/fedora-coreos/update-streams/
2021-09-16T23:01:11
CC-MAIN-2021-39
1631780053759.24
[]
docs.fedoraproject.org
StateManagedCollection.OnRemoveComplete(Int32, Object) Method

Definition

Important: Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

When overridden in a derived class, performs additional work after the IList.Remove(Object) or IList.RemoveAt(Int32) method removes the specified item from the collection.

protected: virtual void OnRemoveComplete(int index, System::Object ^ value);
protected virtual void OnRemoveComplete (int index, object value);
abstract member OnRemoveComplete : int * obj -> unit
override this.OnRemoveComplete : int * obj -> unit
Protected Overridable Sub OnRemoveComplete (index As Integer, value As Object)

Parameters

index: The zero-based index of the item to remove, which is used when IList.RemoveAt(Int32) is called.
value: The object removed from the StateManagedCollection, which is used when IList.Remove(Object) is called.

Remarks

Collections derived from StateManagedCollection can override the OnRemoveComplete method to perform any additional work after an item is removed from the collection using the IList.Remove or IList.RemoveAt method.
https://docs.microsoft.com/en-gb/dotnet/api/system.web.ui.statemanagedcollection.onremovecomplete?view=netframework-4.8
2021-09-16T23:05:05
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
6/Flux Remote Procedure Call Protocol

This specification describes how Flux Remote Procedure Call (RPC) is built on top of request and response messages defined in RFC 3.

Name: github.com/flux-framework/rfc/spec

The Flux RPC protocol enables broker modules, utilities, or other software communicating with a Flux instance to call the methods implemented by broker modules. Flux RPC has the following goals:
- Support location-neutral service addressing, without a location broker.
- Support a high degree of concurrency in both clients and servers.
- Avoid over-engineered mitigations for timeouts, congestion avoidance, etc. that can be a liability in high performance computing environments.
- Provide a mechanism to abort in-progress RPC calls.

Implementation

A remote procedure call SHALL consist of one request message sent from a client to a server, and zero or more response messages sent from a server to a client. The client and server roles are not mutually exclusive; broker modules often act in both roles.

+--------+     Request     +--------+
|        | --------------> |        |
| Client |                 | Server |
|        | <-------------- |        |
+--------+     Response    +--------+

Request Message

Per RFC 3, the request message SHALL include a nodeid and topic string used to aid the broker in selecting appropriate routes to the server. The client MAY address the request in a location-neutral manner by setting nodeid to FLUX_NODEID_ANY; then the tree-based overlay network will be followed to the root looking for a matching service closest to the client. The request message MAY include a service-defined payload. Requests to services that send multiple responses SHALL set the FLUX_MSGFLAG_STREAMING message flag. A request MAY indicate that the response should be suppressed by setting the FLUX_MSGFLAG_NORESPONSE message flag.

Response Messages

The server SHALL send zero or more responses to each request, as established by prior agreement between client and server (e.g. defined in their protocol specification) and determined by message flags. Responses SHALL contain topic string and matchtag values copied from the request, to facilitate client response matching. If the request succeeds and a response is to be sent, the server SHALL set errnum in the response to zero and MAY include a service-defined payload. If the request fails and a response is to be sent, the server SHALL set errnum in the response to a nonzero value conforming to POSIX.1 errno encoding and MAY include an error string payload. The error string, if included, SHALL consist of a brief, human readable message. It is RECOMMENDED that the error string be less than 80 characters and not include line terminators. The server MAY respond to requests in any order.

Streaming Responses

Services that send multiple responses to a request SHALL immediately reject requests that do not have the FLUX_MSGFLAG_STREAMING flag set by sending an EPROTO (error number 71) error response. The response stream SHALL consist of zero or more non-error responses, terminated by exactly one error response. The service MAY signify a successful "end of response stream" with an ENODATA (error number 61) error response. The FLUX_MSGFLAG_STREAMING flag SHALL be set in all non-error responses in the response stream. The flag MAY be set in the final error response.

Matchtag Field

RFC 3 provisions request and response messages with a 32-bit matchtag field. The client MAY assign a unique (to the client) value to this field, which SHALL be echoed back by the server in responses.
The client MAY use this matchtag value to correlate responses to its concurrently outstanding requests. Note that matchtags are only unique to the client. Servers SHALL NOT use matchtags to track client state unless paired with the client UUID. The client MAY set matchtag to FLUX_MATCHTAG_NONE (0) if it has no need to correlate responses in this way, or a response is not expected. The client SHALL NOT reuse matchtags in a new RPC unless it is certain that all responses from the original RPC have been received. A matchtag MAY be reused if a response containing the matchtag arrives with the FLUX_MSGFLAG_STREAMING message flag clear, or if the response contains a non-zero error number. Exceptional Conditions¶ If a request cannot be delivered to the server, the broker MAY respond to the sender with an error. For example, per RFC 3, a broker SHALL respond with error number 38 “Function not implemented” if the topic string cannot be matched to a service, or error number 113, “No route to host” if the requested nodeid cannot be reached. Although overlay networks use reliable transports between brokers, exceptional conditions at the endpoints or at intervening broker instances MAY cause messages to be lost. It is the client’s responsibility to implement any timeouts or other mitigation to handle missing or delayed responses. Disconnection¶ If a client aborts with an RPC in progress, it or its proxy SHOULD send a request to the server with a topic string of “service.disconnect”. The FLUX_MSGFLAG_NORESPONSE message flag SHOULD be set in this request. It is optional for the server to implement the disconnect method. If the server implements the disconnect method, it SHALL cancel any pending RPC requests from the sender, without responding to them. The server MAY determine the sender identity for any request, including the disconnect request, by reading the first source-address routing identity frame (closest to routing delimiter frame) from the request message. Servers which maintain per-request state SHOULD index it by sender identity so that it can be removed upon receipt of the disconnect request. Cancellation¶ A service MAY implement a method which allows pending requests on its other methods to be canceled. If implemented, the cancellation method SHOULD accept a JSON object payload containing a “matchtag” key with integer value. The sender of the cancellation request and the matchtag from its payload MAY be used by the service to uniquely identify a single request to be canceled. The client SHALL set the FLUX_MSGFLAG_NORESPONSE message flag in the cancellation request and the server SHALL NOT respond to it. If the canceled request did not set the FLUX_MSGFLAG_NORESPONSE message flag, the server SHOULD respond to it with error number 125 (operation canceled).
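For readers using the Python bindings that ship with flux-core, a simple RPC over this protocol looks roughly like the following. This is a sketch under the assumption that a Flux instance is running and the flux package is importable; broker.ping is just an example topic.

import flux

h = flux.Flux()  # connect to the enclosing Flux instance
future = h.rpc("broker.ping", {"seq": 1})  # sends a request message with a JSON payload
print(future.get())  # blocks until the matching response arrives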
https://flux-framework.readthedocs.io/projects/flux-rfc/en/latest/spec_6.html
2021-09-16T21:54:58
CC-MAIN-2021-39
1631780053759.24
[]
flux-framework.readthedocs.io
With this feature, you will always be on top of your inventory, since we will notify you through email when any of your products is close to being out of stock. You can choose the limit at which you want to be notified, and you will receive the email automatically. To activate it, just follow these steps:

1. Go to your app dashboard, click on Account and then click on Activity Updates.
2. On the Out of Stock Alerts tab, click on Receive out of stock alerts to enable the notifications. You will receive them on the email address registered with Shopify.
3. Type the quantity at which you would like to be notified when products are about to go out of stock.
4. Click on the Save button.
https://docs.appikon.com/en/articles/4717914-what-are-out-of-stock-alerts-and-how-can-i-access-them
2021-09-16T21:03:07
CC-MAIN-2021-39
1631780053759.24
[array(['https://downloads.intercomcdn.com/i/o/318049341/2560301e44eeb3801fa2adef/image.png', None], dtype=object) ]
docs.appikon.com
To welcome users to Community Central, we have provided some sample text. To edit this text, simply click the Edit Page button in the Page ribbon and enter new text in the Content Editor Web Part. When you are finished, click the Stop Editing button. NOTE: You must have the Add and Customize Pages permission in SharePoint to edit the welcome text. This permission is included in the SharePoint Design and Full Control permissions levels and in the Community Central Moderators and Administrators permission levels. See also:
https://docs.bamboosolutions.com/document/welcome_text/
2021-09-16T22:27:47
CC-MAIN-2021-39
1631780053759.24
[]
docs.bamboosolutions.com
Understand the message "AWS instance scheduled for retirement" This message is usually related to hardware maintenance by Amazon. It can be resolved by stopping and starting your server, which will move it to different hardware and fix the problem. It is highly recommended to create a backup before stopping the server to ensure you can recover your data if something goes wrong in the process. IMPORTANT: Note that you should not “restart” the server but instead separately “stop” and “start” it or perform a “hard reboot”, since a normal reboot would not change the hardware.
https://docs.bitnami.com/aws/faq/administration/understand-instance-retirement/
2021-09-16T20:55:11
CC-MAIN-2021-39
1631780053759.24
[]
docs.bitnami.com
confluent local log

Description

View a snapshot or tail the log of a service.

Important: The confluent local commands are intended for a single-node development environment and are not suitable for a production environment. The data that are produced are transient and are intended to be temporary. For production-ready workflows, see Install and Upgrade.

confluent local log <service> -- [<argument>] --path <path-to-confluent>

Flags
https://docs.confluent.io/5.3.1/cli/command-reference/confluent-local/confluent_local_log.html
2021-09-16T21:08:41
CC-MAIN-2021-39
1631780053759.24
[]
docs.confluent.io
Technorati Tags: Windows 7, Windows Virtual PC, Windows XP Mode, Application Compatibility, Windows Upgrade, VPC, VHD, VM, Virtual Machine

Prasad Saripalli, Principal Program Manager, Microsoft Virtualization Team
https://docs.microsoft.com/en-us/archive/blogs/windows_vpc/windows-virtual-pc
2021-09-16T21:08:57
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
UIElement.IsStylusCapturedChanged Event

Definition

Important: Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Occurs when the value of the IsStylusCaptured property changes on this element.

public: event System::Windows::DependencyPropertyChangedEventHandler ^ IsStylusCapturedChanged;
public event System.Windows.DependencyPropertyChangedEventHandler IsStylusCapturedChanged;
member this.IsStylusCapturedChanged : System.Windows.DependencyPropertyChangedEventHandler
Public Custom Event IsStylusCapturedChanged As DependencyPropertyChangedEventHandler

Event Type: DependencyPropertyChangedEventHandler

Remarks

This member is a CLR event, not a routed event.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.isstyluscapturedchanged?view=netframework-4.7.2
2021-09-16T23:34:55
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
Get-AppLibDBSchema

Gets SQL scripts to create or maintain the database schema for the Citrix AppLibrary Service.

Syntax

Get-AppLibDBSchema [-DatabaseName <String>] [-ServiceGroupName <String>] [-ScriptType <ScriptTypes>] [-LocalDatabase] [-Sid <String>] [-BearerToken <String>] [-AdminAddress <String>] [<CommonParameters>]

Detailed Description

Gets SQL scripts that can be used to create a new AppLibrary Service database schema, add a new AppLibrary Service to an existing site, remove an AppLibrary Service from a site, or create a database server logon for an AppLibrary Service. If no Sid parameter is provided, the scripts obtained relate to the currently selected AppLibrary Service instance; otherwise the scripts relate to the AppLibrary Service instance running on the machine identified by the Sid provided. When obtaining the Evict script, a Sid parameter must be supplied. The current service instance is that on the local machine, or that explicitly specified by the last usage of the -AdminAddress parameter to an AppLibrary SDK cmdlet. The service instance used to obtain the scripts does not need to be a member of a site or to have had its database connection configured.

The database scripts support only Microsoft SQL Server, or SQL Server Express, and require Windows integrated authentication to be used. They can be run using SQL Server's SQLCMD utility, or by copying the script into an SQL Server Management Studio (SSMS) query window and executing the query. If using SSMS, the query must be executed in 'SQLCMD mode'.

The ScriptType parameter determines which script is obtained. If ScriptType is not specified, or is FullDatabase or Database, the script contains:
- Creation of service schema
- Creation of database server logon
- Creation of database user
- Addition of database user to AppLibrary Service roles

If ScriptType is Instance, the returned script contains:
- Creation of database server logon
- Creation of database user
- Addition of database user to AppLibrary Service roles

If ScriptType is Evict, the returned script contains:
- Removal of AppLibrary

The scripts returned support Microsoft SQL Server Express Edition, Microsoft SQL Server Standard Edition, and Microsoft SQL Server Enterprise Edition databases only, and are generated on the assumption that integrated authentication will be used.

If the ScriptType parameter is not included or set to 'FullDatabase' or 'Database', the full database script is returned, which will:
- Create the database schema.
- Create the user and the role (providing the schema does not already exist).
- Create the logon (providing the schema does not already exist).

If the ScriptType parameter is set to 'Instance', the script will:
- Create the user and the role (providing the schema does not already exist).
- Create the logon (providing the schema does not already exist) and associate it with a user.

If the ScriptType parameter is set to 'Login', the script will:
- Create the logon (providing the schema does not already exist) and associate it with a pre-existing user of the same name.

If the LocalDatabase parameter is included, the NetworkService account will be added to the list of accounts permitted to access the database. This is required only if the database is run on a controller.

If the command fails, the following errors can be returned.

Error Codes

GetSchemasFailed: The database schema could not be found.
ActiveDirectoryAccountResolutionFailed: The specified Active Directory account or Group could not be found.

Examples

Example 1

C:\PS> Get-AppLibDBSchema -DatabaseName MySiteDB -ServiceGroupName MyServiceGroup > C:\AppLibrarySchema.sql

Description

Gets a script to create the full database schema for the Citrix AppLibrary Service and copies it to a file called "C:\AppLibrarySchema.sql". This script can be used to create the service schema in a database with name "MySiteDB", which must already exist, and must not already contain an AppLibrary service schema.

Example 2

C:\PS> Get-AppLibDBSchema -DatabaseName MySiteDB -ScriptType Login > C:\AppLibraryLogins.sql

Description

Gets a script to create the appropriate database server logon for the AppLibrary service. This can be used when configuring a mirror server for use.
https://developer-docs.citrix.com/projects/delivery-controller-sdk/en/latest/AppLibrary/Get-AppLibDBSchema/
2021-09-16T21:44:52
CC-MAIN-2021-39
1631780053759.24
[]
developer-docs.citrix.com
Date: Sat, 3 Dec 2011 01:28:49 -0500
From: APseudoUtopia <[email protected]>
To: [email protected]
Subject: ZFS Filesystems wont auto-mount on boot
Message-ID: <CAKOHg=O-immXozBytrqO-dFuKN-OT=O04zGz7m9JK3Rrcva=Ww@mail.gmail.com>

... data visible. Here's the boot log:

Trying to mount root from zfs:root []...
Dec 3 01:23:07 init: login_getclass: unknown class `daemon`
cannot open /etc/rc: No such file or directory

Thank you!
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=798581+0+archive/2011/freebsd-questions/20111204.freebsd-questions
2021-09-16T22:45:17
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
1 Introduction

An end event defines the location where the flow will stop. If the return type of the flow is not Nothing, a return value should be specified. If you want to stop your flow after an activity, you link the activity to an end event using a sequence flow. In this case, the flow is called from another flow that expects the buyer to be returned.

2 Behavior Properties

2.1 Return Value

The return value is the value that is returned to the flow that called the current flow. The value can be entered as an expression.
https://docs.mendix.com/refguide7/end-event
2021-09-16T22:36:40
CC-MAIN-2021-39
1631780053759.24
[array(['attachments/819203/917940.png', None], dtype=object)]
docs.mendix.com
Ultrasound sensors are a useful way of measuring distance. Ultrasound sensors communicate with the kit using two wires. A signal is sent to the sensor on the trigger pin, and the length of a response pulse on the echo pin can be used to calculate the distance. Ultrasound should only be considered accurate up to around two metres; beyond that, the signal can become distorted and produce erroneous results. The sensor has four pin connections: ground, 5V (sometimes labelled VCC), trigger and echo. Most ultrasound sensors will label which pin is which. The ground and 5V pins should be wired to the ground and 5V pins of the Arduino respectively. The trigger and echo pins should be attached to two different digital IO pins. Take note of these two pins; you'll need them to use the sensor. If the sensor always returns a distance of zero, it means the trigger and echo pins are connected the wrong way around! Either change the pin numbers in the code, or swap the connections.
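The distance maths behind the echo pulse is simple: the pulse covers the round trip to the object and back, so the one-way distance is half of the pulse duration multiplied by the speed of sound. A small Python sketch of just that calculation, assuming roughly 343 m/s for the speed of sound in air:

SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees C

def pulse_to_distance(echo_pulse_seconds):
    # The pulse spans the round trip, so halve duration * speed of sound.
    return echo_pulse_seconds * SPEED_OF_SOUND / 2.0

print(round(pulse_to_distance(0.0058), 2))  # a ~5.8 ms echo is roughly one metre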
https://docs.sourcebots.co.uk/kit/servo-assembly/ultrasound/
2021-09-16T21:02:04
CC-MAIN-2021-39
1631780053759.24
[]
docs.sourcebots.co.uk
Tables (Available in all TurboCAD Variants)

Default UI Menu: Draw/Table
Ribbon UI Menu:

Insert Table enables you to insert an empty table, and Modify Table enables you to add or edit text in the table. Insert Table is available on the Draw menu, and Modify Table is available on the Modify menu. Both tools are also available on the Text toolbar. You can display the Text toolbar by right-clicking on any toolbar area and selecting Text.

Changing Rows and Columns, Merging Cells

Other than adding text or changing individual cell properties, which are done using the Modify Text tool, table changes are made with the Edit Tool. Note: For details on the Edit Tool, see its own section of this guide.

- To start editing, activate the Edit Tool and click the table. You can move any of the nodes to change the size of a single row or column.
- To add a row or column, press Shift and click the cell to the left of or above where the new item will go. Note: If you want to Shift-select a new cell, you must first use Shift and click to de-select the current cell.
- Insert Row and Insert Column are available on the local menu or Inspector Bar. The new row is added below the selected cell.
- To remove a row or column, Shift-select a cell in that row or column and choose the corresponding remove option in the local menu or Inspector Bar. The row or column is removed.
- To merge cells, Shift-select each cell you want to merge.
- Select Merge Cells in the local menu or Inspector Bar. The selected cells are now one cell.
- To separate them again, Shift-select the cell and select Unmerge.

Local Menu & Keyboard

The local menu provides additional controls and lists the complementary keyboard commands for those functions. You can change and expand or contract the cells which are selected. Most importantly, you can opt to edit the cell's text or block.

Insert Table

Default UI Menu: Draw/Table/Table
Ribbon UI Menu:

Before creating a table, you can define a table style.

Specify Insert Point

With this method, you click the top left point of the table.
- Make sure Specify Insert Point is selected in the local menu or Inspector Bar. The number of columns and rows, as well as column width and row lines (number of lines of text), is set in the Inspector Bar.
- Click once in the file, and the table is inserted.
- If you don't want to insert another table, press Esc or start a new tool.
- If you need to change the size of an individual row or column, use the Edit Tool.

Specify Window

Fixed Number of Rows - Columns
- Activate both Calculated Row Height and Calculated Column Width.
- Click once to set the top left corner of the table.
- Set the number of columns and rows.
- Move the mouse to size the table. The number of cells remains the same, no matter how the table is sized.
- Click the second corner to insert the table. If you specified a table style, it will be applied.
- If you need to change the size of an individual row or column, use the Edit Tool.

Fixed Cell Size
- Turn off both Calculated Row Height and Calculated Column Width.
- Click once to set the top left corner of the table.
- Set the column width and number of text lines per row.
- Move the mouse to size the table. The cell size remains the same, no matter how the table is sized. Cells are added or removed as needed.
- Click the second corner to insert the table. If you specified a table style, it will be applied.
- If you need to change the size of an individual row or column, use the Edit Tool.

Modify Table

Default UI Menu: Draw/Table/Modify Table
Ribbon UI Menu:

Modify Table is used to add text to cells, or to edit existing cell text. It can also be used to change properties of individual cells.

Adding or Editing Cell Text

If you are using styles, the table style will refer to a text style for each type of text (data, header, and title). So unless you want to use standard text, define text styles first. Note: For details on text styles, see their own section of this guide. Open the table style you want to use, and specify the text style you want to use for Data text. You can also set text color and height here. There are also categories for Header and Title, which can each have their own text style.

- To add text to the table, activate Modify Table.
- Make sure Edit Cell Text is active in the local menu or Inspector Bar.
- Select the table you want to modify, then click in a cell where you want to place text.
- Click in each cell and select Edit Text from the Inspector Bar or local menu, then type the text. If your drawing has blocks, you can also use Insert Block to insert a block into the cell.
- If you want to remove text from a cell, click it and select Clear Cell Content from the local menu or Inspector Bar.
- When the table text is complete, end Modify Text. The properties of the cell text will match their text styles.

Changing Cell Properties
- If you want to use Modify Text to change properties of an individual cell, make sure Edit Cell Text is not active.
- Click the cell you want to change.
- The properties of the cell can be changed in the Selection Info palette. In this example, the cell's fill color was changed.
- When finished, the cell has the new properties.

Text Rotation: TurboCAD tables now support the "Text Rotation" property. The table cell now has an editable property, Text Rotation. Cell editing has also been improved: the cell selection is not lost after changing the properties of the cell using the Selection Info Palette.

Table Export (*.xlsx)

Default UI Menu: Draw/Table/Table Export
Ribbon UI Menu:

This tool allows you to save a table as an Excel (.xlsx) file. To save an XLSX file:
- Select the tool.
- Click on a table.
- Type in a file name.
- Click the Save button.
- Press the Space bar to finish.

Table Import (*.xlsx)
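Once exported, the .xlsx file can be inspected or post-processed outside TurboCAD with any spreadsheet tool or library. As one illustration (assuming the third-party openpyxl package and a placeholder file name), a few lines of Python print the table contents:

from openpyxl import load_workbook  # third-party: pip install openpyxl

wb = load_workbook("table.xlsx")  # placeholder name for the exported file
ws = wb.active
for row in ws.iter_rows(values_only=True):
    print(row)  # each row of the exported table as a tuple of cell values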
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Database-Tables-and-Reports/Tables/
2021-09-16T22:26:13
CC-MAIN-2021-39
1631780053759.24
docs.imsidesign.com
Trimmer

Bonsai offers support for automatically removing old indices from your cluster to save space. If you're using Bonsai to host logs or some application that creates regular indices following a time-series naming convention, then you can specify one or more prefix patterns and Bonsai will automatically purge the oldest indices that match each pattern. The pattern will only match from the start of the index name. This feature is found in the Cluster dashboard, under the Trimmer section.

For example, assume you're indexing time-series data, say, the number of support tickets in a day. You're putting these in time-series indices like "support_tickets-201801010000", "support_tickets-201801020000", and so on. With this feature, you could specify a pattern like "support_tickets-", and we'll auto-prune the oldest indices first when you reach the size limit specified for the pattern. Indices scheduled for removal will be highlighted in red. Please note we will not purge the last remaining index that matches the pattern, even if its size is above the limit.

Note on Trimmer and deleting documents

The Trimmer feature only allows you to delete whole indices, and only if more than one index matches the same trimmer pattern. The Trimmer does not delete individual documents in an index. To remove a number of documents from an index, your best option is to use delete_by_query. Here is an example for deleting the 50 oldest documents, according to a hypothetical "date" field:

POST /<index name>/_delete_by_query?conflicts=proceed
{
  "query": { "match_all": {} },
  "sort": {
    "_script": {
      "type": "date",
      "order": "asc"
    }
  },
  "size": 50
}

The Elasticsearch Search API can be used to identify the documents to delete. Please use your Console (or an equivalent tool) and build your query using the Search API first, and verify that you will only be deleting the documents you want. Do this before running the delete_by_query call against your query.
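Before running the deletion, you can preview exactly which documents match by issuing a similar body to the Search API. A minimal sketch - the index name and the "date" field are placeholders from the example above:

POST /<index name>/_search
{
  "query": { "match_all": {} },
  "sort": [ { "date": "asc" } ],
  "size": 50
}

If the 50 hits returned here are the documents you expect to lose, the delete_by_query call above should be safe to run.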
https://docs.bonsai.io/article/332-trimmer
2021-09-16T22:36:46
CC-MAIN-2021-39
1631780053759.24
docs.bonsai.io
Getting Started

Get up and running in observIQ in just a few minutes.

Welcome to observIQ! Follow the steps below to ship your logs to observIQ in just a few minutes.

1. Add an Agent

After signing in to observIQ, you'll be taken to our Onboarding page. Choose your platform, then copy and run the installation command on the host you'd like to gather logs from. For specific information on setting up different platforms, visit Linux/Windows/MacOS or Kubernetes/Openshift.

After running the installation command, your agent will appear in the table below. Next, click 'Add More Sources'.

2. Add a Source to your Agent

Next, choose your Source from the list (note: if you don't see your desired source on the list, use one of our Generic sources or reach out to our Customer Success team). Verify the configuration parameters match your system configuration, then click Create. See Add a Source to your Agent for more information.

3. Explore your Logs

After adding a Source to your agent, click 'Explore' to start diving into your logs. You can sort, search and visualize your logs using Dynamic Filters, pre-made Dashboards and more. Learn more with View your logs.

Next Steps

Now that your logs are flowing to observIQ, you can dive into observIQ's rich feature set:

- Dashboards: Visualize your data with pre-built dashboards or use visualizations to create your own.
- Alerts: Create alerts and notifiers with email, Slack, or PagerDuty.
- Live Tail: Use Live Tail to debug your applications and infrastructure in real time.
https://docs.observiq.com/docs/getting-started
2021-09-16T22:37:38
CC-MAIN-2021-39
1631780053759.24
docs.observiq.com
Install on Linux

- Installers
- Compatibility Reports
- Requirements
- Manual Install and Launching
- Security-Enhanced Linux (SELinux)
- Desktop Entry
- Arch Linux
- Troubleshooting
- Uninstall

This page covers how to install and uninstall the Portmaster on Linux.

Installers

We provide package installers for supported systems:

- .deb for Debian/Ubuntu (how to)
- PKGBUILD for Arch
- .pkg.tar.xz for Arch (Testing, CI Build / how to)

Please note that we only support the latest stable and LTS versions. We may be able to help out with other systems, but will not be able to invest a lot of time in order to keep focus.

The installers should take care of any needed dependencies. Please report back if they do not!

Please note that the Portmaster updates itself and that the provided packages are only meant for an initial install. Uninstalling the package from your system will properly uninstall and remove the Portmaster.

Compatibility Reports

Help make the Portmaster better for everyone by reporting your experience on different Linux distros.

- Linux Kernel
- Desktop Environments

Requirements

The Portmaster Core Service is compatible with the Linux Kernel as of version 2.4, but due to a breaking bug in at least v5.6, we recommend using v5.7+.

Dependencies:

- libnetfilter_queue - for network stack integration
- libappindicator3 - for sending desktop notifications (optional, but recommended)
- Network Manager - for better integration (optional, but recommended)

Debian/Ubuntu

sudo apt install libnetfilter-queue1 libappindicator3-1

You may need to enable the universe or multiverse repository sources on Ubuntu.

Fedora

sudo yum install libnetfilter_queue

Arch

sudo pacman -S libnetfilter_queue libappindicator-gtk3

Manual Install and Launching

0. Install dependencies.

1. Download the latest portmaster-start utility and initialize all resources:

# Create portmaster data dir
mkdir -p /var/lib/portmaster
# Download portmaster-start utility
wget -O /tmp/portmaster-start
sudo mv /tmp/portmaster-start /var/lib/portmaster/portmaster-start
sudo chmod a+x /var/lib/portmaster/portmaster-start
# Download resources
sudo /var/lib/portmaster/portmaster-start --data /var/lib/portmaster update

All data is saved in /var/lib/portmaster. The portmaster-start utility always needs to know where this data directory is.

2. Start the Portmaster Core Service:

sudo /var/lib/portmaster/portmaster-start core

3. Start the Portmaster UI:

/var/lib/portmaster/portmaster-start app

4. Start the Portmaster Notifier:

/var/lib/portmaster/portmaster-start notifier

Your desktop environment may not (yet) be compatible.

5. Start it on boot

In order to get the Portmaster Core Service to start automatically when booting, you need to create a systemd service unit at /etc/systemd/system/portmaster.service. The following unit file works but excludes most of the security-relevant settings. For a more restricted version use this portmaster.service file.

[Unit]
Description=Portmaster Privacy App

[Service]
Type=simple
ExecStart=/var/lib/portmaster/portmaster-start core --data=/var/lib/portmaster/
ExecStopPost=-/sbin/iptables -F C17
ExecStopPost=-/sbin/iptables -t mangle -F C170
ExecStopPost=-/sbin/iptables -t mangle -F C171
ExecStopPost=-/sbin/ip6tables -F C17
ExecStopPost=-/sbin/ip6tables -t mangle -F C170
ExecStopPost=-/sbin/ip6tables -t mangle -F C171

[Install]
WantedBy=multi-user.target

Finally, reload the systemd daemon and enable/start the Portmaster:

sudo systemctl daemon-reload
sudo systemctl enable --now portmaster

6. Enjoy!
Security-Enhanced Linux (SELinux)

If you are running with SELINUX=enforcing, you were probably not successful in running the Portmaster and might see the following error in journalctl -u portmaster:

dub 16 22:09:10 dev-fedora systemd[1]: Started Portmaster Privacy App.
dub 16 22:09:10 dev-fedora systemd[30591]: portmaster.service: Failed to execute command: Permission denied
dub 16 22:09:10 dev-fedora systemd[30591]: portmaster.service: Failed at step EXEC spawning /var/lib/portmaster/portmaster-start: Permission denied
dub 16 22:09:10 dev-fedora systemd[1]: portmaster.service: Main process exited, code=exited, status=203/EXEC

This happens because SELinux will not allow you to run a binary from /var/lib/portmaster as a systemd service. For this to work, you need to change the SELinux security context type of the portmaster-start binary using the following command:

sudo chcon -t bin_t /var/lib/portmaster/portmaster-start

Now you can restart the portmaster service and check that the Portmaster started up successfully by running:

systemctl restart portmaster
systemctl status portmaster

Desktop Entry

To find and launch the Portmaster from within your desktop environment, you need to create a file with metadata which tells your system how to run the Portmaster, which icon it should display in the taskbar, and so on. The easiest way to do this on other distributions is to download the latest desktop entry and png icon from the portmaster-packaging repository:

sudo wget -O /usr/local/share/applications/portmaster.desktop
sudo wget -O /usr/share/pixmaps/portmaster.png

Right after you download both files, the Portmaster should appear in your system search with an icon. If you still cannot see the Portmaster icon, please check whether the portmaster-start path in the desktop entry matches the path of your installation.
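For reference, a minimal portmaster.desktop could look like the sketch below. The Exec and Icon paths are assumptions based on the manual-install layout used on this page, not the packaged file - prefer downloading the official entry as described above:

[Desktop Entry]
Type=Application
Name=Portmaster
Comment=Portmaster Privacy App
Exec=/var/lib/portmaster/portmaster-start app --data=/var/lib/portmaster
Icon=/usr/share/pixmaps/portmaster.png
Categories=Network;Security;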
Arch Linux

For Arch users we provide a PKGBUILD file in the portmaster-packaging repository. It is not yet submitted to AUR as we want to collect some feedback first. To install the Portmaster using the PKGBUILD, follow these steps:

# Install build-dependencies, you can remove them later:
sudo pacman -S imagemagick # required to convert the Portmaster logo to different resolutions

# Install runtime dependencies:
sudo pacman -S libnetfilter_queue webkit2gtk

# Clone the repository
git clone

# Enter the repo and build/install the package (it's under linux/)
cd portmaster-packaging/linux
makepkg -i

# Start the Portmaster and enable autostart
sudo systemctl daemon-reload
sudo systemctl enable --now portmaster

Troubleshooting

Check if the Portmaster Is Running

You can check if the Portmaster system service is actually running, or if it somehow failed to start, by executing the following command:

sudo systemctl status portmaster

This should show something like active (running) since <start-time>. Please also check if the start time seems reasonable. If it seems strange, try looking at the logs.

Starting and Stopping the Portmaster

If you encounter any issues, you might want to (temporarily) stop the Portmaster. You can do this as follows:

# This will stop the portmaster until you reboot.
sudo systemctl stop portmaster

# This will disable automatically starting the Portmaster on boot.
sudo systemctl disable portmaster

Changing the Log Level

When debugging or troubleshooting issues, it is always a good idea to increase the debug output by adjusting the Log Level.

Accessing the Logs

Portmaster logs can either be viewed using the system journal or by browsing the log files in /var/lib/portmaster/logs. In most cases, the interesting log files will be in the core folder.

# View logs of the Portmaster using the system journal.
sudo journalctl -u portmaster

# You can also specify a time-range for viewing.
sudo journalctl -u portmaster --since "10 minutes ago"

Debugging Network Issues

Due to the Portmaster being an application firewall, it needs to integrate deeply with the networking stack of your operating system. That means that "no network connectivity" might be caused at different points during connection handling. The following steps will help you figure out where the actual issue comes from. Please include the output of the commands below in any related issues, as it is very valuable in debugging your problem.

1. Check if the Portmaster Is Actually Up and Running

2. Test Direct Network Connectivity

The Portmaster includes a local DNS resolver to provide its monitoring and some filtering capabilities. In order to track down the issue, connect directly to an IP address. Should this work, it would indicate that there is a problem with the Portmaster's DNS resolver.

# Check if a ping message succeeds.
# The Portmaster currently always allows ping messages.
ping 1.1.1.1

# Check if an HTTP request succeeds.
# In case of an error, look for "curl" in the network monitor of the Portmaster.
curl -I 1.1.1.1

# Or use wget to check if an HTTP request succeeds.
# In case of an error, look for "wget" in the network monitor of the Portmaster.
wget -S -O /dev/null 1.1.1.1

3. Test DNS Resolving

If the above step works, the issue most likely resides somewhere at the DNS resolving level. To confirm, please try the following:

# Check if a DNS request succeeds.
# In case of an error, look for "dig" in the network monitor of the Portmaster.
dig one.one.one.one
dig wikipedia.org

# Or use nslookup to check if a DNS request succeeds.
# In case of an error, look for "nslookup" in the network monitor of the Portmaster.
nslookup one.one.one.one
nslookup wikipedia.org

No Network Connectivity After the Portmaster Stops

In case of a rapid unscheduled shutdown, the Portmaster may sometimes fail to clean up its iptables rules and thus break networking. To work around this, either use the recommended systemd service unit included in our installers or execute the following command:

sudo /var/lib/portmaster/portmaster-start recover-iptables

Uninstall

Uninstalling the portmaster package from your system will properly uninstall and remove the Portmaster.

GUI

Most distros have a graphical software and package manager. You can easily find it by opening the "Start Menu" and searching for "software".

Debian/Ubuntu

sudo apt purge portmaster

Arch

sudo pacman -Rnsu portmaster
https://docs.safing.io/portmaster/install/linux
2021-09-16T22:53:40
CC-MAIN-2021-39
1631780053759.24
docs.safing.io
GetJustify (Tk C API, version 8.6.8)

Tk_GetJustifyFromObj places in *justifyPtr the justify value corresponding to objPtr's value. This value will be one of the following:

- TK_JUSTIFY_LEFT - Means that the text on each line should start at the left edge of the line; as a result, the right edges of lines may be ragged.
- TK_JUSTIFY_RIGHT - Means that the text on each line should end at the right edge of the line; as a result, the left edges of lines may be ragged.
- TK_JUSTIFY_CENTER - Means that the text on each line should be centered; as a result, both the left and right edges of lines may be ragged.

Keywords: center, fill, justification, string
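As a usage illustration, here is a minimal C sketch around the documented call. The surrounding helper and its option-processing context are hypothetical; Tk_GetJustifyFromObj itself is the routine described above:

#include <tk.h>

/* Parse a justify option value, reporting errors through the interpreter.
   Returns TCL_OK on success, TCL_ERROR otherwise. */
static int
ConfigureJustify(Tcl_Interp *interp, Tcl_Obj *objPtr, Tk_Justify *justifyPtr)
{
    if (Tk_GetJustifyFromObj(interp, objPtr, justifyPtr) != TCL_OK) {
        return TCL_ERROR;   /* interp's result already holds an error message */
    }
    switch (*justifyPtr) {
    case TK_JUSTIFY_LEFT:    /* right edges of lines may be ragged */
        break;
    case TK_JUSTIFY_RIGHT:   /* left edges of lines may be ragged */
        break;
    case TK_JUSTIFY_CENTER:  /* both edges may be ragged */
        break;
    }
    return TCL_OK;
}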
http://docs.activestate.com/activetcl/8.6/tcl/TkLib/GetJustify.html
2018-12-10T02:25:20
CC-MAIN-2018-51
1544376823236.2
docs.activestate.com
1. Overview

A typical enPortal deployment will consist of:

- application configuration: users, domains, provisioning information, etc.
- custom assets on the file system such as configuration files.
- external supporting pieces required for operation but not strictly part of enPortal.

enPortal provides a configurable export system which will automatically capture all system configuration (1) and much of the custom assets (2). If custom assets are not captured, then the export configuration can be modified to include them, which would be considered a deployment best practice. Exporting the external supporting pieces (3) is beyond the scope of the product itself but should be considered from an overall perspective.

The reasons for performing backups are:

- adhere to good backup maintenance practices.
- provide a mechanism to migrate configuration from one environment to another (for example from development, to staging, to production).
- use a backup when upgrading enPortal itself.
- provide a way for Edge to further investigate support issues - a backup archive may be requested.

2. Creating Backups

To create backups, hover your mouse over the Advanced tab and select the Backup option. The Backup page allows the user to create new backups, and download or delete existing backups on the server file system. Select the Create button to create a new backup. Multiple backup options will be presented.

2.1. Backup Options

- Backup AppBoard Option: backs up only AppBoard components, specifically Stacks, Data Collections, and Data Sources. Everything else, including server configuration, users, domains, roles, stack assignment, managed variables, and all enPortal-specific custom export properties, is not backed up.
- Backup All Option: use the Backup All option for full system backups.
- Backup Portal Option: use the Backup Portal option to export all enPortal configuration in addition to most of the server configuration.

2.2. Customizing the Export

Customizing the export is a recommended best practice; the property file may exist for other purposes already. The key part is to define an export.custom.other property with a list of rules defining the files to export:

..34.jar;\
${webapp.xmlroot}/appboard/config/iconregistry/custom_icons.csv;\
${webapp.home}/visualizer/assets/images/custom_banner_logo.png;\
${webapp.webinf}/test_dir;\
${webapp.webinf}/image_dir,,,.*\.png

In this example a DB driver, an icon registry file, a custom graphic, and the test_dir are specific paths to be exported. The last entry specifies a path and file expression so that only files ending with .png are included.

The format for each export rule is noted below. The initial Path is not a regular expression and must match exactly a single file or directory. PathExpression and FileExpression are regular expressions, not wildcards. The use of the exclude keyword is optional and implies the preceding expression should be excluded.

<Path>,<PathExpression>,exclude,<FileExpression>,exclude,<FileExpression>,exclude,...

After making changes to the set of excludes, perform a backup and verify the resulting (.jar) archive contains the desired set of files. The archive can be uncompressed using unzip; depending on the unzip tool, it may need to be renamed to end with .zip.

2.3. Unattended Backups

The backup mechanism is only accessible via the web interface.
To perform an unattended backup, create a custom script to authenticate and call the following URL to generate the backup (the example below calls the Backup All option; a scripted sketch is provided at the end of this page):

/enportal/servlet/pd/vdir/home/role/portalAdministration/Menu/Admin/Advanced/Backup?requestType=execute&Submit=true

3. Loading Backups (archives)

Loading backup archives is done on the command line, and the enPortal server must be shut down beforehand. Before proceeding with loading a backup, please be aware that this process is disruptive and will replace the existing configuration. For example, loading a Backup All archive will replace all existing enPortal and AppBoard content and configuration settings.

To load an archive:

- shut down enPortal
- in a terminal, change into the [INSTALL_HOME]/server/bin directory
- run: portal <Load_Type> -jar <backup_archive.jar>
- (Linux / UNIX) re-run post_install.sh - this is required to ensure correct system configuration and file ownership and permissions, and that any included scripts are set executable, as jar archives do not store this information.
- start enPortal

The applicable Load Types are defined below: file system without resetting the configuration database. All AppBoard content will be replaced, however. The following steps are also required to ensure data sources are loaded correctly:

- Shutdown.
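As referenced in section 2.3, here is a minimal shell sketch of an unattended backup script using curl. The hostname, login endpoint, parameter names, and credentials are all assumptions for illustration - substitute your portal's actual authentication mechanism:

# Authenticate and store the session cookie (hypothetical login URL and parameters)
curl -c /tmp/enportal.cookies \
  -d "user=admin&password=secret" \
  "https://portal.example.com/enportal/servlet/pd/login"

# Trigger the Backup All option via the documented URL
curl -b /tmp/enportal.cookies \
  "https://portal.example.com/enportal/servlet/pd/vdir/home/role/portalAdministration/Menu/Admin/Advanced/Backup?requestType=execute&Submit=true"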
http://docs.edge-technologies.com/docs/enportal/5.5/admin/enPortal_installation/backup_and_recovery
2018-12-10T02:00:54
CC-MAIN-2018-51
1544376823236.2
docs.edge-technologies.com
EFM32 Gecko MCU and Peripheral Software Documentation Release Notes | Downloads - CMSIS-CORE Device headers for the EFM32 Gecko - EMLIB Peripheral Library - EnergyAware Driver Library - Platform Middleware - Board Support Package - Kit Driver Library Please also see Simplicity Studio for precompiled demo applications, application notes and software examples. Related Documentation
https://docs.silabs.com/mcu/5.4/efm32g/
2018-12-10T02:55:35
CC-MAIN-2018-51
1544376823236.2
docs.silabs.com
Personalize Web Push Messages

Most web push providers send simple notification messages to all their users. However, this is not always the most effective approach. Frontuser allows you to use Matrix object values inside the notification message body and title as placeholders. They can be substituted with Matrix object values for each user.

Why should you personalize web push?

To get the most out of web push notifications, they need to be relevant at a user level. A relevant message is one about a subject the user cares about. Personalized web push messages can increase CTR by 25% compared to generic messages.

Before You Start

Here are some things you need to know before personalizing your content.

- Get familiar with the Matrix Object.
- To include a Matrix object in your content, type # in the title or message box and the system will show the available variables in a drop-down list.
- Matrix objects won't work for the notification image and redirection URL.

Personalization Examples

Simple

Let's say you want to send a notification with the username and a discount offer on the currently visited product page. A personalized message looks similar to the one below:

Here, if the user → name object exists in the Matrix variable, it will substitute the username with its value; otherwise it is replaced with "John" as the default value. The same applies to the product name as well.

Conditional

Conditional personalization allows you to insert dynamic content into the message. When a notification that uses conditional blocks is sent, we'll substitute its content with the Matrix object value that matches the conditions. For example, say you want to send a notification with the conditions below:

- If the user is logged in, then use the username.
- Show the product name if the category is "apparels" and the product has "t-shirt" as its SKU.

For the above scenario your content looks like the one below:

Condition 1: If the user is logged in, return the username; otherwise return the static string "there".

Condition 2: Return the product name having the "t-shirt" SKU, when the page is a category listing and the category name is "apparels".

Warning: If the category doesn't have a "t-shirt" SKU product, it will return the product name of the first item.

You can create conditions as complex as needed to retrieve your preferred Matrix object value.

Advantages of personalizing web push

- Maximize conversion rate
- Retain more users
- Open rates for personalized messages are 3x better than for generic messages.

Whatever content you create as a push message, it needs to be engaging. Here are some tips to improve your content strategy:

- Craft your message to be informative
- Keep the message short and relevant
- Make your notification attention-getting
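To make the examples above concrete, here is an illustrative sketch in pseudo-syntax. The actual placeholder tokens are inserted by typing # in the editor, so the token names and fallback notation below are assumptions rather than Frontuser's documented grammar:

Title:   Hi #user.name | "John", welcome back!
Message: #product.name | "This product" is now 20% off - grab it on your next visit.

Here #user.name is substituted per subscriber when the Matrix object exists, and the quoted value after | acts as the default.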
https://docs.frontuser.com/webpush/personalized/
2018-12-10T02:45:57
CC-MAIN-2018-51
1544376823236.2
docs.frontuser.com
Setting up the Kubos Windows Development Environment

What is the Kubos Windows Development Environment?

The Kubos Windows Development Environment is a way to edit files on the SDK through an IDE. Since Windows does not support symlinks, editing the files can be a pain, as they are only accessible to tools within the SDK such as vim or nano. This guide walks through a single method to edit those files through an IDE on the host machine, rather than through these command line tools.

Note: Before proceeding, please make sure you have installed the SDK.

How does it work?

The environment is set up to treat the SDK like a remote machine, and uses an automatic FTP plug-in to allow the user to view and edit files on the SDK as if they were being edited locally. The chosen environment consists of:

- Notepad++
- NppFTP plugin

This same method can be used with many common IDEs that have FTP packages for working on remote servers.

Installation

Install Notepad++ here. Unless you know what you're doing and want to use something else, choose the first option of the installer: "Notepad++ Installer 32-bit x86". Choose all the default options in the installer (unless, as it states, you know what you're doing).

Install the NppFTP plugin using the Plugin Manager.

- Go to "Plugins" -> "Plugin Manager" -> "Show Plugin Manager"
- Under "Available", find "NppFTP". Click the box next to it to select it, then select "Install".

Note: It might prompt you to update the Plugin Manager before installing. I would recommend doing this once. It will require a restart of Notepad++, and you will have to repeat all the steps. If it prompts again after the first time, select "No" and it should install normally.

- After Notepad++ has restarted, you should now see "NppFTP" as one of the options under "Plugins".

Setup

Find the Vagrant configuration parameters

Go to the install location of the Kubos SDK and bring up your Vagrant. As it initializes, it will output its configuration:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'kubos/kubos-dev' is up to date...
==> default: A newer version of the box 'kubos/kubos-dev' is available! You currently
==> default: have version '0.2.3'. The latest is version '1.0.1'. Run
==> default: `vagrant box update` to update.
==> default: Clearing any previously set forwarded ports...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => C:/Users/jacof/Documents/git/kubos
    default: /vagrant_data => C:/Users/jacof/Documents/git/kubos
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.

Record the SSH address (127.0.0.1:2222) and the SSH username (vagrant). If the VM is already up, you can also issue vagrant ssh-config to get the hostname and port info.

Note: If you update your Vagrant box, this information could change.

Configure NppFTP to access the SDK

- Go to "Plugins" -> "NppFTP" -> "Show NppFTP Window". This should bring up the NppFTP window on the right side.
- In the NppFTP window, go to "Settings" (the gear) -> "Profile Settings"
- Select "Add New" in the bottom left, and name it "Kubos SDK".
- Edit the settings to match the picture below.
You'll need to input:

- Hostname and Port from the SSH address recorded previously
- Username: "vagrant"
- Password: "vagrant"
- Initial remote directory: "/home/vagrant/"
- Connection type: SFTP

Usage

Connect to the Vagrant box by selecting "(Dis)Connect" -> "Kubos SDK". This should automatically pull up the file system of the Vagrant with the /home/vagrant directory open. It should say "NppFTP - Connected to Kubos SDK" at the top of the NppFTP window.

Now you can open and edit files! Double-clicking a file in the file tree will open it locally. If you make changes to any file, NppFTP will automatically transfer the file and replace it on the SDK whenever you hit save.

Allowing UDP Communication

There are certain scenarios where the SDK needs to be able to receive UDP packets from an OBC when connected via a local ethernet port, for example when using the file transfer client. In this case, Windows Firewall may need to be updated to allow this traffic. The GUI steps are below; an equivalent command-line rule is sketched after this list.

- Open 'Windows Firewall with Advanced Security'. You can find this program by opening the start menu and searching for "firewall"
- Click on "Inbound Rules", then scroll down to the "VBoxHeadless" rules. Find the rule which blocks UDP traffic on Public networks.
- Right-click on the rule and select "Disable Rule"
- Right-click on "Inbound Rules" and select "New Rule"
- Select "Custom" for the type of rule
- Select "All programs"
- Select "UDP" as the protocol type. Leave the "Local port" and "Remote port" settings as "All Ports"
- Under "Which remote IP addresses does this rule apply to?", click "These IP addresses", then click "Add"
- In the "This IP address or subnet" field, add the IP address of your OBC, then click "OK", then click "Next"
- Select "Allow the connection"
- In the "When does this rule apply?" menu, leave all checkboxes selected
- In the "Name" field, enter something descriptive for the rule. For example, "Allow UDP from OBC". Then click "Finish" to finalize and activate the new rule.
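If you prefer the command line over the GUI, an equivalent inbound rule can be created with netsh from an elevated Command Prompt. This is a minimal sketch - the rule name and the OBC address (192.168.0.10) are placeholders for your own values:

netsh advfirewall firewall add rule name="Allow UDP from OBC" dir=in action=allow protocol=udp remoteip=192.168.0.10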
https://docs.kubos.com/1.8.0/sdk-docs/windows-dev-environment.html
2018-12-10T02:28:26
CC-MAIN-2018-51
1544376823236.2
docs.kubos.com
Enable Username and Password Authentication for your Ops Manager Project

Overview

To change the authentication or SSL settings for your project, first unmanage any MongoDB deployments that Ops Manager manages in your project.

Procedure

This procedure describes how to configure and enable username and password authentication when using Automation. If your Monitoring or Backup agents are not managed by Ops Manager, you must manually configure them to use usernames and passwords. See Configure Monitoring Agent for Authentication and Configure Backup Agent for Authentication for instructions.

Note: If you configure the Ops Manager application to authenticate using SCRAM-SHA-256, you cannot deploy pre-4.0 MongoDB clusters.

Configure SSL if desired.

- Toggle the Enable SSL slider to Yes.
- Click Next.

Note: See Enable SSL for a Deployment for SSL setup instructions. SSL is not required for use with Username/Password (MONGODB-CR/SCRAM-SHA-1) or Username/Password (SCRAM-SHA-256) authentication.

Configure Username/Password (MONGODB-CR/SCRAM-SHA-1) or Username/Password (SCRAM-SHA-256) for the Agents.

You can enable more than one authentication mechanism for your MongoDB deployment, but the Ops Manager agents can only use one authentication mechanism. Select Username/Password (MONGODB-CR/SCRAM-SHA-1) to connect to your MongoDB deployment. Check Username/Password (MONGODB-CR/SCRAM-SHA-1) and/or Username/Password (SCRAM-SHA-256) from Agent Auth Mechanism. Click Save.

Create MongoDB Roles for LDAP Groups. (Optional)

After enabling LDAP authorization, you need to create custom MongoDB roles for each LDAP group you specified for LDAP authorization.
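To illustrate the last step, here is a minimal mongo shell sketch for creating a custom role that maps to an LDAP group. The group DN and the granted roles below are placeholder assumptions; in MongoDB's LDAP authorization model, the role's name must exactly match the LDAP group's DN, and the role must be created on the admin database:

use admin
db.createRole({
  role: "cn=dba,ou=groups,dc=example,dc=com",  // must equal the LDAP group DN
  privileges: [],
  roles: [ { role: "readWriteAnyDatabase", db: "admin" } ]
})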
https://docs.opsmanager.mongodb.com/current/tutorial/enable-mongodbcr-authentication-for-group/
2018-12-10T02:23:09
CC-MAIN-2018-51
1544376823236.2
docs.opsmanager.mongodb.com
Setting up an ETJump server

Here you will find simple instructions on how to set up an ETJump server on Windows and Linux machines.

Setting up a server on Windows

Setting up an ETJump server on Windows is simple. Start by downloading the latest version of ETJump. After you've downloaded the ETJump zip, create a directory in the ET installation root and name it etjump. If you call it something else, the server will force everyone to download the mod files, even if they have them in their own etjump directory. Unpack the zip files in the etjump directory. Below is a picture of the directory structure you should get.

Now you can start the server by running the following command:

ETDED.exe +set fs_game etjump +map oasis

You can either run the command from the command line or you can create a shortcut to ETDED.exe and add +set fs_game "etjump" +map oasis as additional parameters.

If you wish to play on the server with the same ET installation the server uses, you must first unpack the etjump .pk3 file into the same directory. To do this, open the etjump .pk3 with any software that can open .zip files and extract the contents to the same directory. After you've done this, the directory should look like the following.

Setting up a server on Linux

Setting up a server on Linux is simple, as shown in the sketch below. Download the latest version of ETJump. Once you're done, create an etjump directory in your ET installation directory. Unzip the contents of the downloaded zip to that directory. Start ETDED and set the mod to etjump with the +set fs_game etjump parameter.
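Here is the Linux procedure condensed into a shell sketch. The download URL is omitted on this page, and the dedicated-server binary name (etded) varies between ET builds - treat both as assumptions:

cd /path/to/et                           # your ET installation root
mkdir etjump
unzip ~/Downloads/etjump-*.zip -d etjump
./etded +set fs_game etjump +map oasis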
https://etjump.readthedocs.io/en/latest/server/server_setup/
2018-12-10T03:23:10
CC-MAIN-2018-51
1544376823236.2
etjump.readthedocs.io
Add content, such as trust marks, logos, etc., above or below the product content on a product (PROD) page.

This section will only appear if you have enabled the UPS Developer Kit module.

Use this section to restrict payment methods at the product level. In our example, we've already installed the Authorize.Net Payment Module and selected the Authorize.Net payment methods American Express, Visa, and MasterCard.

For related examples, please see:

For a bird's eye view see: To Configure Payment Modules and Methods.

Shipping rules can be set in two places:

Please also see Miva Merchant Images - A Brief History.
https://docs.miva.com/reference-guide/products-create-new-product
2018-01-16T11:28:29
CC-MAIN-2018-05
1516084886416.17
docs.miva.com
The Cascade

What is the Cascade?

Statamic's unique data structure gives you the ability to set and override data, variables and settings alike, as you traverse deeper into your file structure. To best illustrate the Cascade, we'll use the example of a blog post that contains a list of related posts. Here we can see a number of scopes, outlined beautifully with multicolored lines.

Firstly, the outermost scope (blue) will contain your Global variables. Then heading inwards (green), we arrive at the page scope. This level will contain variables associated with data at the current URL. In our example, we're on the "My First Day" entry's URL. This means that all the variables inside that entry will be available. Need the title? Use {{ title }}. There's no need to use something like the {{ get_content }} tag to retrieve the data manually.

This is also the first time you may see the cascade at work. Let's say there's a global variable named overview. If your entry also contains an overview variable, the global will be overridden. Depending on the situation, this could be beneficial or frustrating. We'll learn how to combat this later.

Moving further inwards, we have two Tags being used: a Relate tag for taxonomy terms (the terms being represented by purple outlines), and a Collection tag (entries in red). Both of these follow the same rules of the Cascade. Within either of these loops, their variables override the parent. For example, the entries (red) display their titles using {{ title }}. Inside the first red block, {{ title }} would output Speeder Bikes, Wookies, and Ewoks, where outside that loop, {{ title }} would output My First Day.

Fighting the Cascade

More often than not, the Cascade is a wonderful thing. However, sometimes it hinders us. The most classic example is dealing with missing fields when inside a loop. In our example, the page contains a tags field for showing taxonomy terms. If we were to try to use the tags field in the collection loop (red), and one of the entries didn't have a tags field, it would actually use the tags from the page scope.

To ensure that the collection tag only ever uses variables from the entry in the current iteration of the loop, we can explicitly "scope" the variables:

{{ collection:blog scope="post" }}
    <div>
        {{ post:title }}
    </div>
{{ /collection:blog }}

The other popular example of fighting against the cascade is when you want to access a parent variable. For example, when you're in a collection loop and you'd like to get a value in the page scope. Well, variables in the page scope are actually aliased into the page variable. So at any point, you can access page-level variables by doing {{ page:variable_name }}.

<h1>Our trip to {{ title }}</h1>

{{ collection:gallery location:is="italy" }}
    <img src="{{ image }}" alt="{{ title }} in {{ page:title }}" />
{{ /collection:gallery }}

Here we have a page whose title is where we went on vacation last summer. Italy sounds good. Then we loop over any gallery entries where the location is in Italy. Within each image we want the alt tag to say what happened and where. The gallery entry might have its own title of the event (Visiting the Colosseum) and then append in Italy.

Reserved variables

To help in wrangling the cascade, we've aliased variables into other places. The main example is the page array. If you were to create a variable named page, that would override the page scope. Don't do that. Here's a list of words we recommend that you don't use as field/variable names:

- page
- global / globals
https://docs.statamic.com/cascade
2018-01-16T11:34:51
CC-MAIN-2018-05
1516084886416.17
docs.statamic.com
Troubleshoot SDN Applies To: Windows Server (Semi-Annual Channel), Windows Server 2016 The topics in this section provide information about troubleshooting the Software Defined Networking (SDN) technologies that are included in Windows Server 2016. Note For additional Software Defined Networking documentation, you can use the following library sections. This section contains the following topics. - Troubleshoot the Windows Server Software Defined Networking Stack - Blog post Troubleshoot Configuring SDN RAS Gateway VPN Bandwidth Settings in Virtual Machine Manager - Blog post SDN Troubleshooting: Find the Local SDN RAS Gateway Server IP Address - Blog post SDN Troubleshooting: UDP Communication and Changing Network Controller Cert
https://docs.microsoft.com/en-us/windows-server/networking/sdn/troubleshoot/troubleshoot-software-defined-networking
2018-01-16T11:36:10
CC-MAIN-2018-05
1516084886416.17
docs.microsoft.com
Getting Your AWS Access Keys After you've signed up for Amazon SES, you'll need to obtain your AWS access keys if you want to access Amazon SES through the Amazon SES API, whether by the Query (HTTPS) interface directly or indirectly through an AWS SDK, the AWS Command Line Interface, or the AWS Tools for Windows PowerShell. AWS access keys consist of an access key ID and a secret access key. For information about getting your AWS access keys, see AWS Security Credentials in the AWS General Reference.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/get-aws-keys.html
2019-03-18T17:56:30
CC-MAIN-2019-13
1552912201521.60
docs.aws.amazon.com
Information and support for Azure Information Protection

Applies to: Azure Information Protection, Office 365

Use the following resources to help you learn about, deploy, and support Azure Information Protection for your organization.

Information about new releases and updates

The Azure Information Protection product team posts announcements about major new releases on the Enterprise Mobility + Security blog. Smaller releases are announced on the Azure Information Protection Yammer site, and you might also find it useful to check the UserVoice site for the status of requested features.

You'll find additional and more detailed information on the Azure Information Protection technical blog. For example, a summary of documentation changes is published each month to let you know about information for any new releases, updates to support statements, and also corrections and clarifications for existing releases. These doc update posts are titled "Azure Information Protection Documentation Update for <month year>".

Support options and community resources

The following sections provide information about support and troubleshooting options, and community resources.

To contact Microsoft Support

If you have Premier Support, visit the portal for Premier Support customers to submit incidents, browse solutions, and get help.

Additional resources:

- Microsoft Ignite 2018 sessions for Azure Information Protection
- Microsoft Virtual Academy sessions that include Azure Information Protection

Troubleshooting:

- If you have a question about how something works: Check whether your question is already answered on the Frequently asked questions page.
- If you have a question about a support statement for Azure Information Protection: See the Requirements information, which is regularly updated.
- If you have a question about the Azure Information Protection client for Windows: See the Installation checks and troubleshooting section from the administrator guide, check that you're using a supported version, or ask on the TechNet forum for Microsoft RMS (Cloud).
https://docs.microsoft.com/en-us/azure/information-protection/information-support
2019-03-18T18:47:02
CC-MAIN-2019-13
1552912201521.60
docs.microsoft.com
IT Business Management

Define a sibling rollup relationship

Define a relationship to roll up amounts to accounts in the sibling segments. You can roll up the expenses to any account in the hierarchy, not restricted to the immediate parent or grandparent in the hierarchy.

Before you begin
Role required: cost_transparency_admin or cost_transparency_analyst

Procedure
1. Navigate to Financial Modeling > Cost Models > All.
2. Click the Sibling Rollup Relationship tab.
3. Click New.
4. On the form, fill in the fields.
https://docs.servicenow.com/bundle/london-it-business-management/page/product/it-finance/task/define-sibling-rollup-relationship.html
2019-03-18T18:24:40
CC-MAIN-2019-13
1552912201521.60
docs.servicenow.com
The Turbo 360 platform is organized into three broad areas.

Begin by installing the necessary libraries globally to run the Turbo build commands.

Vertex projects are the default format for scaffolds. To create a new project:

$ turbo new <PROJECT_NAME>

Next, change directory into the project and install dependencies:

$ cd <PROJECT_NAME>
$ npm install

Create a project in your Turbo dashboard, then copy the APP_ID from the upper right corner of the admin console. Back in the terminal, connect your local source code to the project on Turbo:

$ turbo app <APP_ID>

Run the devserver, then navigate to the local URL and you should see the following screen:

$ turbo devserver

Vertex also comes with a full React/Redux template out of the box. To create a React/Redux project, simply add the --react flag when scaffolding:

$ turbo new <PROJECT_NAME> --react

The React and Redux source code is located under the /src directory in the root level of the project. A webpack config file is also provided in the root level, and the index.mustache file under the /views directory is where the compiled React source is mounted.

Vertex projects come with one-command theme integration. To view the currently available themes:

$ turbo themes

To apply a theme:

$ turbo theme <THEME_NAME>

Below is what the project should look like after executing the above commands:

$ turbo theme hyperspace
$ turbo devserver

Turbo 360 Vectors are stand-alone units of functionality that can be deployed on the Turbo platform. Vectors are completely de-coupled from any web service and therefore can be re-used in multiple projects without re-deploying. As such, Vectors are best suited for broad areas of functionality that are common to many apps, such as email notification or web scraping. Vectors are created using the Vector project scaffold option:

$ turbo new <VECTOR_GROUP_NAME> --vector
$ cd <VECTOR_GROUP_NAME>
$ npm install
$ turbo app <APP_ID>

The JavaScript Vectors are found in the '..vectors/js/index.js' file. Each Vector has a unique name and takes two arguments: req and res. The 'req' argument is the http request that triggers the function and has the following attributes:

Vector payloads are sent back in the 'res' argument. This is typically a JSON object containing a data payload, but it can also be a raw data type such as a string or number. The 'res' argument has the following methods:

Including NPM modules with your Vectors is done by running an install command from the root directory of your project:

$ turbo install <MODULE_NAME>

IMPORTANT: Even if you already installed the module via NPM, this command is still necessary to include it with your function.

To deploy your JavaScript Vectors, from the root directory simply type:

$ turbo vectors

Once the deployment completes, Turbo will provide an HTTP endpoint for your Vectors.

The following example queries the Google Maps API with a given address and returns only the latitude/longitude coordinates.
The Google Maps API returns a large amount of data when querying for address coordinates, and this function "extracts" only the latitude and longitude data for a simpler payload (..vectors/js/index.js).

Turbo Vectors can also be executed via a Python runtime environment, found under the '..vectors/py' directory. Each Vector has a unique name and takes an event argument. The 'event' argument is the http request that triggers the function and has the following attributes:

Responses are sent in the return object, which is typically a JSON object. The following example sends an SMS message using the Twilio API. It assumes a Python 3.6 runtime environment (..vectors/py/app.py).

The Turbo SDK provides a core set of functionality out-of-the-box which tremendously reduces the amount of time required to create, configure and deploy a full stack application. Though Turbo projects (Vertex and Vectors) are not required to use the SDK, they are designed to work together in a complementary fashion. The following areas of functionality are provided by the Turbo SDK:

If you have any questions, comments or feedback, feel free to contact us at [email protected]
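To make the Google Maps example above concrete, here is a minimal JavaScript sketch of such a Vector. The export shape, req.query, and res.json() are assumptions based on the req/res description on this page (the full attribute and method lists did not survive in this copy), and YOUR_API_KEY is a placeholder:

// ..vectors/js/index.js
const https = require('https')

module.exports = {
  // Called via the Vector's HTTP endpoint, e.g. .../geocode?address=...
  geocode: function(req, res) {
    const address = encodeURIComponent(req.query.address || '')
    const url = 'https://maps.googleapis.com/maps/api/geocode/json?address=' + address + '&key=YOUR_API_KEY'
    https.get(url, function(response) {
      let body = ''
      response.on('data', function(chunk) { body += chunk })
      response.on('end', function() {
        const data = JSON.parse(body)
        const location = data.results[0].geometry.location
        // Return only the simplified lat/lng payload described above
        res.json({ lat: location.lat, lng: location.lng })
      })
    })
  }
}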
https://docs.turbo360.co/
2019-03-18T17:23:07
CC-MAIN-2019-13
1552912201521.60
docs.turbo360.co
4.2.2.2.1 SubActivity

When, inside an activity, a parameter's type is itself an activity, it is a sub-activity of the main activity. This nested approach means we can create trees of activities, where a complex process can be captured as an aggregation of smaller activities. In this case, the AllowedKind for the parameter is subActivity.
http://docs.energistics.org/CTA/CTA_TOPICS/CTA-000-045-0-C-sv2100.html
2019-03-18T17:28:04
CC-MAIN-2019-13
1552912201521.60
docs.energistics.org
The last item in the list touched either in the UI or via a list-related command

Member of Tree (PRIM_TREE)

Data Type - PRIM_TREE.TreeItem - Reference to the Tree containing the item

CurrentItem is the item in the list that was last touched, either by the mouse, the keyboard, or a LANSA list command such as SELECTLIST or GET_ENTRY. When CurrentItem is set, the field values associated with the list definition will be returned to the component. This ensures that the field values for the last item are correct when the event is fired.

Care is required when relying on CurrentItem to identify the item last clicked. This will actually be the FocusItem, and it won't change until another item is clicked. CurrentItem, however, can be affected by a simple MouseOver. CurrentItem is best suited to processing directly related to the list, e.g. DoubleClick events or SELECTLIST. When dealing with actions driven by external sources, for example a button click to process the last item, using FocusItem is recommended.

Using the FOR command to iterate over the items in a list will not affect CurrentItem.

Using CurrentItem to set the image for the last item added to a list:

Add_Entry To_List(#List)
#List.CurrentItem.Image <= #Image1

Using CurrentItem in a SELECTLIST loop to check item state. Here the selected items in the list are being totalled:

SelectList Named(#List)
Continue (*Not #Item.Selected)
#TotalSalary += #Salary
EndSelect

Using the FOR command iterates over the items in their sorted sequence, but CurrentItem will not be set. Use GET_ENTRY to set the CurrentItem and ensure field values are returned to the component:

For Each(#Item) in(#List.Items)
Get_Entry Number(#Item.Entry) From_List(#List)
* User processing here
EndFor

All Component Classes

Technical Reference

February 18 V14SP2
https://docs.lansa.com/14/en/lansa016/prim_tree_currentitem.htm
2019-03-18T18:12:55
CC-MAIN-2019-13
1552912201521.60
docs.lansa.com
All public logs From UABgrid Documentation Combined display of all available logs of UABgrid Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 13:56, 26 January 2011 [email protected] (Talk | contribs) moved page Gromacs Benchmark to Gromacs (This page will be an introduction and overview to the application. Its focus should be on Gromacs in general with sub pages linking to more specific content. For an example see the NIH's Biowulf page on Gromacs) - 13:51, 26 January 2011 [email protected] (Talk | contribs) marked revision 2400 of page Gromacs Benchmark patrolled
https://docs.uabgrid.uab.edu/tgw/index.php?title=Special:Log&page=Gromacs+Benchmark
2019-03-18T18:12:14
CC-MAIN-2019-13
1552912201521.60
docs.uabgrid.uab.edu
The CS Time Importing utility is a dynamic tool used to assist with importing clockings and employee information from ASCII text files created by other clocking and employee management systems. These data files can be imported manually, or a device can be created to import newly created files automatically. An example file with the current generic import formats is available under Files at the bottom of this page.
http://docs.cstime.com/Importing_Data
2019-03-18T17:53:41
CC-MAIN-2019-13
1552912201521.60
docs.cstime.com
filter.csv-to-xml.has-header (collection.cfg setting)

Description

Controls whether the CSV file has a header. If set to true, the produced XML will have values in elements named after the fields. For example, the CSV file:

year,model
1999,Foo

would be transformed into XML containing:

<year>1999</year>
<model>Foo</model>

If no header is defined (and no custom header is defined), the field number will be used instead, for example:

<field_0>1999</field_0>
<field_1>Foo</field_1>

Default value

By default a CSV file is assumed to not have a header:

filter.csv-to-xml.has-header=false

Examples

If the CSV file has a header:

filter.csv-to-xml.has-header=true
https://docs.funnelback.com/develop/programming-options/document-filtering/filter_csv_to_xml_has_header_collection_cfg.html
2019-03-18T17:41:50
CC-MAIN-2019-13
1552912201521.60
docs.funnelback.com
FrameworkElement Class

Definition

Provides a base element class for Windows Runtime UI objects. FrameworkElement defines common API that support UI interaction and the automatic layout system. FrameworkElement also defines API related to data binding, defining and examining the object tree, and tracking object lifetime.

struct winrt::Windows::UI::Xaml::FrameworkElement : UIElement

Inheritance: UIElement → FrameworkElement

Remarks

FrameworkElement is a base element: it's a class that many other Windows Runtime classes inherit from in order to support the XAML UI element model. Properties, methods and events that FrameworkElement defines are inherited by hundreds of other Windows Runtime classes.

Many common XAML UI classes derive from FrameworkElement, either directly or through intermediate base classes such as Panel or Control. Typically, you don't derive classes directly from FrameworkElement, because certain expected services for a class that is intended for a UI representation (such as template support) are not fully implemented there. More commonly used base classes for derived custom classes are:

- Specific controls that are not sealed (for example, TextBox).
- Control base classes (Control, ContentControl, UserControl).
- Navigation elements (Page, Frame).
- Panel classes (the base class Panel, or specific non-sealed implementations such as Grid).

FrameworkElement API and features

FrameworkElement extends UIElement, which is another base element, and adds support for various Windows Runtime feature areas.

Layout

The layout system recognizes all objects that derive from FrameworkElement as elements that potentially participate in layout and should have a display area in the app UI. The layout system reads various properties that are defined at the FrameworkElement level, such as MinWidth. Most UI elements use the FrameworkElement-defined Width and Height for their basic sizing information. FrameworkElement provides extensible methods for specialized layout behavior that panels and controls with content can override in their class implementations. For more info, see Define layouts with XAML.

Prominent API of FrameworkElement that support layout: Height, Width, ActualHeight, ActualWidth, Margin, MeasureOverride, ArrangeOverride, HorizontalAlignment, VerticalAlignment, LayoutUpdated.

Object lifetime events

You often want to know when an object is first loaded (loaded is defined as when an object becomes attached to an object tree that connects to the root visual). FrameworkElement defines events related to object lifetime that provide useful hooks for code-behind operations. For example, you need object lifetime info to add child objects to a collection or set properties on child objects just prior to use, with assurance that the necessary objects in the object tree have already been instantiated from XAML markup. For more info, see Events and routed events overview.

Prominent API of FrameworkElement that support object lifetime events: Loaded, SizeChanged, Unloaded, OnApplyTemplate.

Data binding

The ability to set a value for a potentially inherited data context for a data binding is implemented by FrameworkElement. FrameworkElement also has API for establishing data binding in code rather than in XAML. For more info, see Data binding in depth.
Prominent API of FrameworkElement that support data binding: DataContext, DataContextChanged, SetBinding, GetBindingExpression. XAML language and programming model integration Usually your app's element structure resembles the XAML markup that you defined to create the UI, but sometimes that structure changes after the XAML was parsed. FrameworkElement defines the Name property and related API, which are useful for finding elements and element relationships at run time. For more info, see XAML namescopes. Prominent API of FrameworkElement that support XAML and programming model: Name, FindName, Parent, BaseUri, OnApplyTemplate. Globalization The FrameworkElement class defines the Language property and the FlowDirection property. For more info, see Globalizing your app. Style and theme support The FrameworkElement class defines the Style property and the RequestedTheme property. Also, the Resources property is used to define the page-level XAML resource dictionaries that typically define styles and templates, as well as other shared resources. For more info, see Styling controls and ResourceDictionary and XAML resource references. FrameworkElement dependency properties Many of the read-write properties of the FrameworkElement class are dependency properties. FrameworkElement derived classes FrameworkElement is the parent class for several immediately derived classes that distinguish several broad classifications of UI elements. Here are some of the notable derived classes: - Control: Control has many more derived control classes; basically all of the XAML controls that you use for a Windows Runtime UI are derived from Control. - Presenters: A presenter is a class that imparts a visual appearance, usually by contributing to some control scenario, but the presenter itself isn't typically interactive. For example: Border, ContentPresenter (parent of ScrollContentPresenter and others), ItemsPresenter, Viewbox. - Media and web elements: Image, WebView, MediaElement, CaptureElement. These display content and have some level of interactivity that happens within their content, but they aren't actually controls. - Text display elements: TextBlock, RichTextBlock, RichTextBlockOverflow, Glyphs. (Text elements like Run and Hyperlink, which often declare the content of a text display element, are not derived from FrameworkElement.) - The Panel base class: Panel is the parent class for the common panels such as Grid, StackPanel and so on. - The Shape base class: Shape is the parent class for Path, Rectangle and so on. - The IconElement base class: parent class for FontIcon, SymbolIcon and so on. - Miscellaneous UI elements: Popup, TickBar, Viewbox.
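To ground the API areas described above, here is a minimal C++/WinRT sketch, in the same register as the struct declaration shown earlier. It is a sketch only: the ConfigureElement helper and the Button parameter are invented for illustration and are not part of the FrameworkElement documentation.

    // Exercise FrameworkElement layout properties and the Loaded lifetime
    // event on a Button (Button derives from FrameworkElement via Control).
    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.UI.Xaml.h>
    #include <winrt/Windows.UI.Xaml.Controls.h>

    using namespace winrt;
    using namespace Windows::Foundation;
    using namespace Windows::UI::Xaml;
    using namespace Windows::UI::Xaml::Controls;

    void ConfigureElement(Button const& button)
    {
        // Sizing, margin and alignment are FrameworkElement-defined API.
        button.Width(120.0);
        button.MinWidth(80.0);
        button.Margin(Thickness{ 8, 8, 8, 8 });
        button.HorizontalAlignment(HorizontalAlignment::Center);

        // Loaded fires once the element joins the live object tree, so
        // ActualWidth/ActualHeight are meaningful inside the handler.
        button.Loaded([](IInspectable const& sender, RoutedEventArgs const&)
        {
            auto element = sender.as<FrameworkElement>();
            double measured = element.ActualWidth(); // safe to read here
            (void)measured;
        });
    }

The Loaded handler here only demonstrates the lifetime hook; real code would typically wire up children or read layout results at this point.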
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.FrameworkElement
2019-03-18T17:28:23
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
This is an important announcement from Never Settle – the developers of WooCommerce Amazon Fulfillment: It has been an amazing 4 years providing what was originally the only integration between WooCommerce and Amazon Fulfillment, which we initially built for our own Osana project. Back then it was called “NS FBA for WooCommerce.” Four years is an eternity in technology terms and big things are shifting – Amazon is no longer allowing individual sellers to automatically obtain their own developer credentials through which to access Amazon’s Marketplace Web Service (MWS) (the technology that facilitates integration between software like WooCommerce and FBA). Amazon’s position, based on security and accountability concerns, is that MWS developer credentials cannot be used in software connecting to FBA that is not directly coded by the sellers themselves. We’ve expressed our difference of opinion: the open source ecosystem in which WooCommerce thrives more than satisfies those concerns, as well as the Amazon requirements for MWS credentials, by giving sellers total transparency and ownership over 100% of the code in plugins like WooCommerce Amazon Fulfillment. Despite this, Amazon has held firm to their position and we are respectfully pursuing all available options to ensure ongoing, compliant integration with FBA through MWS. In the meantime here’s what this means for you: Amazon has asked us – as an initial step – to have all existing users of WooCommerce Amazon Fulfillment authorize our Never Settle Developer ID. This will also be a required step of any future authentication mechanism that the WooCommerce Amazon Fulfillment plugin uses to communicate with FBA through MWS. It’s a simple process: - Log into your Seller Central account and go to your Manage Apps area - Click the big “Authorize new developer” button near the top of the page. - In the Developer’s Name field enter: Never Settle - In the Developer ID field enter: 4464-5354-5547 - Click Next - Read, understand, and confirm your agreement with Amazon’s required conditions for granting Developer access - Click Next - On the confirmation page, copy your information and save it in a safe place. You can also access this information in the authorization details under your Seller Central > Manage Apps area, and you will need those values for Seller ID, Marketplace ID, and MWS Auth Token to plug into a future version of WooCommerce Amazon Fulfillment. (Note: no further action is needed at this time after that last step!) Special Caveat for sellers outside North America: One of the casualties in these changes is that our Developer ID is only valid for the North America region. Every Developer ID Amazon issues can only be tied to one Region. All sellers outside this region should still try to authorize our developer ID, but in the future this authorization will not be valid for integrations between WooCommerce and FBA in those regions outside North America. We are working with Amazon to try to find the fastest solution for this issue, and Amazon is aware of the pain involved with this. It is technically possible for us to get unique Developer IDs for all Amazon Regions and offer the future version worldwide. But the current requirements for us to obtain Developer IDs and maintain solutions for every Region are financially and operationally impossible. We would have to: - Open and pay for a unique Amazon Pro Seller account in each Region.
- To do that we’d first have to open bank accounts with local financial institutions in each Region. - To do that we’d first have to create legal entities and tax IDs in each Region. - We’d also need to pay for FBA Services for each Seller account in each Region, and maintain SKUs and at least some separate physical inventory in each Region, to test, support, and update code for the scenarios and conditions that differ per Region. It’s also important for you to be aware that Amazon may decide to start deactivating existing developer credentials if they cannot identify who is using them. If this happens you will be unable to connect to your FBA data from WooCommerce Amazon Fulfillment even if your developer credentials were previously valid. We’re doing everything in our power to prevent this scenario, and the best way you can help is by authorizing our Developer ID in your Seller Central account by following the instructions above. Thank you for your patience during this unexpected challenge! If you have recently purchased WooCommerce Amazon Fulfillment and are still unable to connect, we’d be happy to figure out the best resolution possible, including a refund. We’re especially sensitive to the fact that some of you have been in a holding pattern for weeks over this issue. Please know that we’re moving as quickly as possible to get a new solution in place! As always, if you have any questions about this whatsoever, we’re happy to help – please reach out to us at [email protected]
https://docs.woocommerce.com/document/woocommerce-amazon-fulfillment/important-update-on-woocommerce-amazon-fulfillment/?aff=3069
2019-03-18T18:03:26
CC-MAIN-2019-13
1552912201521.60
[]
docs.woocommerce.com
RBVersion From Xojo Documentation Reports the major and minor version number of Xojo. Usage: result = RBVersion Notes Version 20xx returns a number of the form 20xx.yy, where yy is the Release number. When you build your application, you can enter version information about your application in the App class’s Properties pane. This information is stored in your application's 'vers' resource. For more information, see the chapter on building applications in the User's Guide. You can use RBVersion in an #if statement to determine whether a user is running a particular version of Xojo. Based on the result, you conditionally compile code that is available only for that version. Sample Code The first example below displays the version of Xojo being used; the second includes code only for applications built using Xojo version 2013 Release 1 or above. See the sketches after the See Also list. See Also DebugBuild, XojoVersion, XojoVersionString, TargetBigEndian, TargetCocoa, TargetDesktop, TargetLinux, TargetLittleEndian, TargetMachO, TargetMacOS, TargetWindows, TargetX86 constants; #If...#Endif statement.
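Sketches of the two Sample Code examples described above (minimal reconstructions written from the Notes section, not copied from the original listings):

    ' Display the version of Xojo being used
    MsgBox(Str(RBVersion))

    ' Include code only for Xojo 2013 Release 1 or above
    #If RBVersion >= 2013.01 Then
      ' version-specific code goes here
    #Endif

The comparison works because, as noted above, RBVersion reports a number of the form 20xx.yy, so 2013 Release 1 reads as 2013.01.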
http://docs.xojo.com/RBVersion
2019-03-18T18:32:00
CC-MAIN-2019-13
1552912201521.60
[]
docs.xojo.com
In This Issue CNIC 2013: Exploring All the Possibilities At SNA's Child Nutrition Industry Conference, operators and industry demonstrated how they rise to meet every challenge: by working together. [Read Full Story] USDA Releases Memo on Paid Lunch Equity Earlier this month, USDA released a memo addressing Paid Lunch Equity. [Read Full Story] School Chefs Team Up for Culinary Competition at CNIC The Culinary Institute of America in San Antonio hosted a "Chopped"-style cooking competition for SNA members at the 2013 Child Nutrition Industry Conference. [Read Full Story] Jones Dairy Farm All-Natural Turkey Sausage So tasty, kids will never know they're low in fat, low in calories and low in sodium. Available in links and patties. Try some today. Don't Miss Out on LAC 2013! Spring into action! SNA's Legislative Action Conference (LAC), March 3 to 6, will be here before you know it. Register today and make plans to join hundreds of attendees in our nation's capital. [Read Full Story] Food Trend Predictions for 2013 Now that we're several weeks into 2013, let's take a look at some of the culinary trends to watch for this year as predicted by the experts. [Read Full Story] Josephine Martin National Policy Fellow Applications Available Will you be the first Josephine Martin National Policy Fellow at SNA's Legislative Action Conference this year? Hurry, the January 25th application deadline is fast approaching! [Read Full Story] Help Schools Score a $1000 Breakfast Grant with the "got milk?®" Breakfast Blitz Fuel Up to Play 60 and the National Milk Mustache "got milk?" Campaign have teamed up to help bring breakfast to more kids across the country. [Read Full Story] Events March 3-6, 2013 Legislative Action Conference March 4-8, 2013 National School Breakfast Week May 15-16, 2013 Spring Industry Boot Camp July 14-17, 2013 Annual National Conference [More Meetings & Events] Exams 3/2 - Washington, D.C. 3/18 - Emporia, KS 3/22 - Virginia Beach, VA 4/18 - Savannah, GA 4/27 - West Columbia, SC 6/16 - Houston, TX 6/18 - Greensboro, NC [More Credentialing Exams]
http://docs.schoolnutrition.org/newsroom/cndirect/v12/cndirectv12n2.html
2013-12-05T04:41:48
CC-MAIN-2013-48
1386163040002
[]
docs.schoolnutrition.org
public abstract class AbstractFallbackTransactionAttributeSource extends java.lang.Object implements TransactionAttributeSource Abstract implementation of TransactionAttributeSource that caches attributes for methods and implements a fallback policy: 1. specific target method; 2. target class; 3. declaring method; 4. declaring class/interface. Defaults to using the target class's transaction attribute if none is associated with the target method. Any transaction attribute associated with the target method completely overrides a class transaction attribute. If none is found on the target class, the interface that the invoked method has been called through (in case of a JDK proxy) will be checked. This implementation caches attributes by method after they are first used. If it is ever desirable to allow dynamic changing of transaction attributes (which is very unlikely), caching could be made configurable. Caching is desirable because of the cost of evaluating rollback rules. public TransactionAttribute getTransactionAttribute(java.lang.reflect.Method method, java.lang.Class<?> targetClass) Determines the transaction attribute for this method invocation. Defaults to the class's transaction attribute if no method attribute is found. Specified by: getTransactionAttribute in interface TransactionAttributeSource. Parameters: method - the method for the current invocation (never null); targetClass - the target class for this invocation (may be null). Returns: a TransactionAttribute, or null if the method is not transactional. protected abstract TransactionAttribute findTransactionAttribute(java.lang.reflect.Method method) Subclasses need to implement this to return the transaction attribute for the given method, if any. Parameters: method - the method to retrieve the attribute for. Returns: the transaction attribute associated with this method (or null if none). protected abstract TransactionAttribute findTransactionAttribute(java.lang.Class<?> clazz) Subclasses need to implement this to return the transaction attribute for the given class, if any. Parameters: clazz - the class to retrieve the attribute for. Returns: the transaction attribute associated with this class (or null if none). protected boolean allowPublicMethodsOnly() Should only public methods be allowed to have transactional semantics? The default implementation returns false.
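As an illustration of the contract, here is a minimal, hypothetical subclass that resolves attributes from explicit registries; the class and field names are invented for the sketch, while the caching and the method-to-class fallback order come from the abstract base class described above:

    import java.lang.reflect.Method;
    import java.util.HashMap;
    import java.util.Map;
    import org.springframework.transaction.interceptor.AbstractFallbackTransactionAttributeSource;
    import org.springframework.transaction.interceptor.TransactionAttribute;

    // Hypothetical source backed by explicit maps. The base class caches
    // results per method and falls back from method to class attributes.
    public class MapTransactionAttributeSource extends AbstractFallbackTransactionAttributeSource {

        private final Map<Method, TransactionAttribute> methodAttributes =
                new HashMap<Method, TransactionAttribute>();
        private final Map<Class<?>, TransactionAttribute> classAttributes =
                new HashMap<Class<?>, TransactionAttribute>();

        public void register(Method method, TransactionAttribute attribute) {
            this.methodAttributes.put(method, attribute);
        }

        public void register(Class<?> clazz, TransactionAttribute attribute) {
            this.classAttributes.put(clazz, attribute);
        }

        @Override
        protected TransactionAttribute findTransactionAttribute(Method method) {
            return this.methodAttributes.get(method); // null: not transactional here
        }

        @Override
        protected TransactionAttribute findTransactionAttribute(Class<?> clazz) {
            return this.classAttributes.get(clazz); // consulted as the fallback
        }
    }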
http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/transaction/interceptor/AbstractFallbackTransactionAttributeSource.html
2013-12-05T04:42:45
CC-MAIN-2013-48
1386163040002
[]
docs.spring.io
Declaration of the OpenCL Resources micro-service. #include <mitkOclResourceService.h> The OclResourceService defines a service interface for providing access to the essential OpenCL-related variables. In addition, the service can also store compiled OpenCL programs in order to avoid compiling a single program source multiple times. Definition at line 27 of file mitkOclResourceService.h. Returns a valid cl_command_queue related to the (one) OpenCL context. Returns a valid OpenCL context (if applicable) or nullptr if none is present. Referenced by mitk::OclFilter::CompileSource(), mitk::OclImage::CreateGPUImage(), and mitk::OclImage::TransferDataToGPU(). Returns the identifier of an OpenCL device related to the current context. Referenced by mitk::OclFilter::GetDeviceMemory(). Checks if an OpenCL image format passed in is supported on the current device. Gets the maximum size of an image. Gets the cl_program by name. Inserts a program into the internal program storage. Removes all invalid (i.e., non-compiling) programs from the internal storage. Referenced by mitk::OclBinaryThresholdImageFilter::Update(). Prints the OpenCL context info to std::cout. Removes the given program from storage. Referenced by mitk::OclFilter::~OclFilter().
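A short usage sketch: obtaining the service through the MITK micro-service registry and querying the shared context. The method names GetContext and GetCommandQueue match the member descriptions above but are written from memory of the MITK interface, so treat them as assumptions rather than verbatim API:

    #include <mitkOclResourceService.h>
    #include <usGetModuleContext.h>
    #include <usModuleContext.h>
    #include <usServiceReference.h>

    // Look up the OpenCL resource micro-service and fetch the shared
    // context and its command queue. Error handling reduced to a null check.
    void UseOclResources()
    {
      us::ModuleContext* moduleContext = us::GetModuleContext();
      us::ServiceReference<OclResourceService> ref =
          moduleContext->GetServiceReference<OclResourceService>();
      OclResourceService* resources = moduleContext->GetService(ref);

      if (resources != nullptr)
      {
        cl_context context = resources->GetContext();          // the one shared context
        cl_command_queue queue = resources->GetCommandQueue(); // queue for that context
        (void)context;
        (void)queue;
      }
    }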
http://docs.mitk.org/nightly/classOclResourceService.html
2020-03-28T18:48:18
CC-MAIN-2020-16
1585370492125.18
[]
docs.mitk.org
Sub-Document API A high-performance Sub-Document API that makes more efficient use of network bandwidth than fetching entire documents. Sub-Doc from the SDK The Subdocument API is exposed through two separate builders, which are available through the two top-level Collection methods LookupIn and MutateIn. The object returned, and how to work with it, is extensively documented in our practical guide to using Sub-Doc from the SDK.
https://docs.couchbase.com/php-sdk/3.0/concept-docs/subdocument-operations.html
2020-03-28T18:56:26
CC-MAIN-2020-16
1585370492125.18
[]
docs.couchbase.com
Exclude Domains from HTTPS Filtering If you need to add exclusions from HTTPS filtering, navigate to UI / Squid Proxy / Exclusions and add as many exclusions as desired. Please note, these exclusions work only when browsers are configured to use the Squid proxy explicitly. If you have an invisible intercept proxy, then these exclusions need to be managed by your firewall. The following screenshot shows the default exclusions of the application. The following exclusion types are supported: - Exclusion by remote domain name - Exclusion by IP address, subnet or IP range of the remote domain - Exclusion by IP address, subnet or IP range of the proxy user - Exclusion by time schedule and browser user agent - Exclusion by assigned categories of a domain name The last exclusion type is particularly useful for automatically excluding financial institutions and government sites from HTTPS inspection and SSL filtering. The following screenshot shows the default excluded categories.
https://docs.diladele.com/administrator_guide_old_stable/https_filtering/https_filtering_exclusions.html
2020-03-28T16:52:37
CC-MAIN-2020-16
1585370492125.18
[array(['../../_images/https_exclusions1.png', '../../_images/https_exclusions1.png'], dtype=object) array(['../../_images/https_exclude_by_category1.png', '../../_images/https_exclude_by_category1.png'], dtype=object)]
docs.diladele.com
With a simple script on a Button component, we can store a PDF file into the database so that any user can view it later. This part does not use a Named Query, because Named Query parameters do not have a data type that allows us to pass in the raw file bytes. We can instead use system.db.runPrepUpdate to call a query from the script. This example requires that you have a table with a byte array column in it: for example, MySQL uses the BLOB data type and MSSQL uses the varbinary() data type. Navigate to the Script Editor tab of the actionPerformed event handler. Here we can put a script that will grab the file bytes using the file path and the system.file.readFileAsBytes function, and then insert that into the database along with a user-selected file name; a sketch of such a script follows below. Ignition can render a PDF document inside the PDF Viewer component, which is a part of the Reporting Module. To view PDF files in the Client, your Ignition server must have the Reporting Module installed. Once the module is installed, you can load the bytes from the database into the PDF Viewer component. With no parameters, add a query to select all the files in our files table. Add the query to select the file name and bytes based on the selected ID. This script will take any new selected value and use it in the Named Query we made in step 2 to get the file name and bytes. We can then load the bytes into the PDF Viewer.
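A sketch of what that Button script can look like. The system.* calls are standard Ignition scripting functions; the files table and its filename/filedata columns are placeholder schema names you would adapt to your database:

    # Button actionPerformed script (Jython). Prompts for a PDF, reads its
    # bytes, and inserts them with a user-chosen name.
    path = system.file.openFile("pdf")
    if path is not None:
        data = system.file.readFileAsBytes(path)
        name = system.gui.inputBox("Enter a file name:", "document.pdf")
        if name:
            system.db.runPrepUpdate(
                "INSERT INTO files (filename, filedata) VALUES (?, ?)",
                [name, data])

Because runPrepUpdate uses a prepared statement, the raw byte array can be passed directly as a parameter, which is exactly what Named Query parameters cannot do here.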
https://docs.inductiveautomation.com/pages/diffpagesbyversion.action?pageId=15337284&selectedPageVersions=28&selectedPageVersions=29
2020-03-28T17:59:58
CC-MAIN-2020-16
1585370492125.18
[]
docs.inductiveautomation.com
Configure MongoDB for FIPS¶ New in version 2.6. Overview¶ Some versions of Linux periodically execute a process to prelink dynamic libraries with pre-assigned addresses. This process modifies the OpenSSL libraries, specifically libcrypto. The OpenSSL FIPS mode will subsequently fail the signature check performed upon startup to ensure libcrypto has not been modified since compilation. To configure the Linux prelink process to not prelink libcrypto, see the sketch below. Considerations¶ FIPS is a property of the encryption system and not the access control system. However, if your environment requires FIPS-compliant encryption and access control, you must ensure that the access control system uses only FIPS-compliant encryption. MongoDB’s FIPS support covers the way that MongoDB uses OpenSSL for network encryption, SCRAM authentication, and x.509 authentication. If you use Kerberos or LDAP authentication, you must ensure that these external mechanisms are FIPS-compliant.
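A sketch of that prelink exclusion, as commonly configured on RHEL-style systems; the /usr/lib64 path is an assumption and varies by distribution:

    # Tell the prelink job to blacklist (-b) libcrypto so its signature
    # stays intact for the OpenSSL FIPS startup check.
    sudo bash -c "echo '-b /usr/lib64/libcrypto.so.*' >> /etc/prelink.conf.d/openssl-prelink.conf"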
https://docs.mongodb.com/v3.4/tutorial/configure-fips/
2020-03-28T19:01:10
CC-MAIN-2020-16
1585370492125.18
[]
docs.mongodb.com
Configure Linux iptables Firewall for MongoDB¶ This document outlines basic firewall configurations for iptables firewalls on Linux. Use these approaches as a starting point for your larger networking organization. For a detailed overview of security practices and risk management for MongoDB, see Security. Overview¶ Rules in iptables configurations fall into chains, which describe the process for filtering and processing specific streams of traffic. Chains have an order, and packets must pass through earlier rules in a chain to reach later rules. This document addresses only the following two chains: INPUT - Controls all incoming traffic. OUTPUT - Controls all outgoing traffic. Given the default ports of all MongoDB processes, you must configure networking rules that permit only required communication between your application and the appropriate mongod and mongos instances. Be aware that, by default, iptables allows all connections and traffic unless explicitly disabled. The configuration changes outlined in this document will create rules that explicitly allow traffic from specific addresses and on specific ports, using a default policy that drops all traffic that is not explicitly allowed. When you have properly configured your iptables rules to allow only the traffic that you want to permit, you can Change Default Policy to DROP. Patterns¶ This section contains a number of patterns and examples for configuring iptables for use with MongoDB deployments. If you have configured different ports using the port configuration setting, you will need to modify the rules accordingly. Traffic to and from mongod Instances¶ Suppose the application server that must reach the mongod instance has the address 198.51.100.55. You can also express this using CIDR notation as 198.51.100.55/32. If you want to permit a larger block of possible IP addresses, you can allow traffic from a /24 using one of the following specifications for the <ip-address>: 198.51.100.0/24, or the equivalent netmask form 198.51.100.0/255.255.255.0. Traffic to and from mongos Instances¶ mongos instances provide query routing for sharded clusters. Clients connect to mongos instances, which behave from the client’s perspective as mongod instances. In turn, the mongos connects to all mongod instances that are components of the sharded cluster. Use the same iptables command to allow traffic to and from these instances as you would for the mongod instances that are members of the replica set. Take the configuration outlined in the Traffic to and from mongod Instances section as an example. Traffic to and from a MongoDB Config Server¶ Config servers host the config database that stores metadata for sharded clusters. Config servers listen for connections on port 27019. As a result, add the following iptables rules to the config server to allow incoming and outgoing connections on port 27019, for connection to the other config servers. Replace <ip-address> with the address or address space of all the mongod instances that provide config servers. Additionally, config servers need to allow incoming connections from all of the mongos instances in the cluster and all mongod instances in the cluster. Add rules that resemble the following, replacing <ip-address> with the address of the mongos instances and the shard mongod instances. Traffic to and from a MongoDB Shard Server¶ Shard servers default to port number 27018.
You must configure the following iptables rules to allow traffic to and from each shard. Replace the <ip-address> specification with the IP addresses of all mongod instances. This allows you to permit incoming and outgoing traffic between all shards, including constituent replica set members. Furthermore, shards need to be able to make outgoing connections to the config servers and the mongos instances. Create a rule that resembles the following, and replace <ip-address> with the address of the config servers and the mongos instances. Provide Access For Monitoring Systems¶ The mongostat diagnostic tool, when running with the --discover option, needs to be able to reach all components of a cluster, including the config servers, the shard servers, and the mongos instances. If your monitoring system needs access to the MongoDB HTTP interface, you must ensure the HTTP interface’s port is open. The HTTP interface listens on the port of your mongod instance plus 1000; by default, this is port 28017. Insert the corresponding rule into your iptables chain, replacing <ip-address> with the address of the instance that needs access to the HTTP or REST interface. For all deployments, you should restrict access to this port to only the monitoring instance. Optional: for shard server instances and for config server instances, the rules follow the same pattern with the respective ports. Change Default Policy to DROP¶ The default policy for iptables chains is to allow all traffic. After completing all iptables configuration changes, you must change the default policy to DROP so that all traffic that isn’t explicitly allowed as above will not be able to reach components of the MongoDB deployment. Issue the commands to change this policy. Manage and Maintain iptables Configuration¶ This section contains a number of basic operations for managing and using iptables. There are various front-end tools that automate some aspects of iptables configuration, but at the core all iptables front ends provide the same basic functionality. Make all iptables Rules Persistent¶ By default all iptables rules are only stored in memory. When your system restarts, your firewall rules will revert to their defaults. When you have tested a rule set and have guaranteed that it effectively controls traffic, you should make the rule set persistent. On Red Hat Enterprise Linux, Fedora Linux, and related distributions you can use the service-based save command. On Debian, Ubuntu, and related distributions, you can dump the iptables rules to the /etc/iptables.conf file and restore the network rules from it at boot. Place the restore command in your rc.local file, or in the /etc/network/if-up.d/iptables file with other similar operations. List all iptables Rules¶ To list all currently applied iptables rules, use the list operation at the system shell. Flush all iptables Rules¶ If you make a configuration mistake when entering iptables rules, or simply need to revert to the default rule set, you can use the flush operation at the system shell to remove all rules. If you’ve already made your iptables rules persistent, you will need to repeat the appropriate procedure in the Make all iptables Rules Persistent section.
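As a concrete sketch of the pattern these sections describe, the following pair explicitly allows application traffic to a mongod on the default port 27017 (with <ip-address> as the placeholder used throughout; the other components follow the same shape on their respective ports):

    # Allow incoming connections from the application server to mongod on 27017
    iptables -A INPUT -s <ip-address> -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
    # Allow mongod's replies back out to the application server
    iptables -A OUTPUT -d <ip-address> -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT

Note that the OUTPUT rule only matches ESTABLISHED state, so mongod can answer existing connections but the host cannot initiate new outbound connections on that port.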
https://docs.mongodb.com/v3.4/tutorial/configure-linux-iptables-firewall/
2020-03-28T18:44:08
CC-MAIN-2020-16
1585370492125.18
[]
docs.mongodb.com
$ oc create -f <(echo '
kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
  name: demo (1)
secret: "..." (2)
redirectURIs:
- "" (3)
grantMethod: prompt (4)
')
The authentication layer identifies the user associated with requests to the OpenShift Online API. The authorization layer then uses information about the requesting user to determine if the request should be allowed. Requests to the OpenShift Online API are authenticated using the following methods: OAuth access tokens, obtained from the OpenShift Online OAuth server, and X.509 client certificates. For impersonation requests, OpenShift Online confirms that User A can impersonate the service account named name in namespace. If the check fails, the request fails with a 403 (Forbidden) error code. By default, project administrators and editors can impersonate service accounts in their namespace. The OpenShift Online master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to validate the credentials against a backing identity store. Requests to <master>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI, so OpenShift Online also supports authenticating those requests without an interactive login. Clients may also need to discover information about the built-in OAuth server, for example the address of the <master> server, without manual configuration. To aid in this, OpenShift Online
https://docs.openshift.com/online/pro/architecture/additional_concepts/authentication.html
2020-03-28T18:33:26
CC-MAIN-2020-16
1585370492125.18
[]
docs.openshift.com
In this lesson, you will be guided through a complete GIS analysis in QGIS. Note: Lesson developed by Linfiniti and S Motala (Cape Peninsula University of Technology). As a volunteer for Cape Nature, you have agreed to search for the plant on the closest suitable piece of land to your house. Use your GIS skills to determine where you should go to look. In order to solve this problem, you will have to use the available data (in exercise_data/more_analysis) to find the candidate area that is closest to your house. If you don't live in Cape Town (where this problem is based) you can choose any house in the Cape Town region. The solution will involve reclassifying the raster data, excluding areas near the rural edges with a negative buffer, isolating the candidate areas, and finding the centroid closest to your house. The Custom min / max values fields should now populate with 0 and 1, respectively. (If they do not, then there was a mistake with your reclassification of the data, and you will need to go over that part.) Hide all layers in the Layers list. Now you need to exclude the areas that are within 250m from the edge of the rural areas. Do this by creating a negative buffer, as explained below. Use the Save as... function in the layer's right-click menu for this. Save the file in the Rasterprac directory. Name the file candidate_areas_only.shp. Save your map. Leave edit mode again, and save your edits if prompted to do so. You will need to find the centroids ("centers of mass") for the solution area polygons in order to decide which is closest to your house.
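If you prefer scripting the centroid step, here is a minimal PyQGIS sketch for the QGIS 2.x Python console; it assumes the solution-areas polygon layer is the currently active layer, which is your choice to make in the Layers list:

    # Print the centroid of each solution-area polygon (QGIS 2.x console,
    # Python 2 syntax). Select the polygon layer first.
    from qgis.utils import iface

    layer = iface.activeLayer()
    for feature in layer.getFeatures():
        centroid = feature.geometry().centroid().asPoint()
        print feature.id(), centroid.x(), centroid.y()

Comparing these coordinates against your house's coordinates then identifies the nearest candidate area.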
https://docs.qgis.org/2.8/ru/docs/training_manual/complete_analysis/analysis_exercise.html
2020-03-28T17:17:19
CC-MAIN-2020-16
1585370492125.18
[]
docs.qgis.org
infix gcd Documentation for infix gcd assembled from the following types: language documentation Operators (Operators) infix gcd multi sub infix:<gcd>(Int:D \a, Int:D \b --> Int:D) Coerces both arguments to Int and returns the greatest common divisor. If one of its arguments is 0, the other is returned (when both arguments are 0, the operator returns 0).
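A quick usage sketch, with the behavior described above shown as comments:

    say 12 gcd 18;   # OUTPUT: 6
    say 0 gcd 5;     # OUTPUT: 5  (one argument is 0, so the other is returned)
    say 0 gcd 0;     # OUTPUT: 0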
https://docs.raku.org/routine/gcd
2020-03-28T16:47:30
CC-MAIN-2020-16
1585370492125.18
[]
docs.raku.org
Getting Started This article will get you started in using the RadSpreadProcessing library. It contains the following sections: Assembly References, Creating a Workbook, Exporting, and Using RadSpreadsheet. Assembly References To use the document model of the RadSpreadProcessing library in your project, you need to add references to the following assemblies: Creating a Workbook The document model allows you to instantiate a new workbook and populate it with any data you want. Example 1 shows how you can create a workbook and add a new worksheet to it. Example 1: Create Workbook Workbook workbook = new Workbook(); Worksheet worksheet = workbook.Worksheets.Add(); You can then create a CellSelection and set any value to the selected cells. Example 2 shows how you can create a cell selection and set a string value to it. Example 2: Set value of cell CellSelection selection = worksheet.Cells[1, 1]; //B2 cell selection.SetValue("Hello RadSpreadProcessing"); Exporting The RadSpreadProcessing library supports a variety of formats to which you can export the contents of a workbook. Example 3 shows how you can export the previously created workbook to the Xlsx format. Example 3: Export to Xlsx string fileName = "SampleFile.xlsx"; IWorkbookFormatProvider formatProvider = new XlsxFormatProvider(); using (Stream output = new FileStream(fileName, FileMode.Create)) { formatProvider.Export(workbook, output); } More information about the import and export functionalities of RadSpreadProcessing is available in the Formats and Conversion section. Using RadSpreadsheet RadSpreadsheet is a UI control that is part of the Telerik UI for WPF and Silverlight suites. The document model explained in this section of the documentation and all its features are shared between the RadSpreadProcessing library and RadSpreadsheet. This help section contains information about all UI-specific features of RadSpreadsheet.
https://docs.telerik.com/devtools/document-processing/libraries/radspreadprocessing/getting-started
2020-03-28T18:41:07
CC-MAIN-2020-16
1585370492125.18
[]
docs.telerik.com
PANIC happened on master What is this alert? When a message with severity level PANIC is written to the Greenplum Database log on the master host, Command Center raises an alert. A PANIC message reports a critical error that requires immediate attention. A panic occurs when the system encounters an unexpected state or an error condition that could not be handled in the software. All currently executing jobs in the cluster are cancelled when a PANIC occurs. Some examples of conditions that could cause a PANIC are: - Unable to access critical files - Full disk or memory buffer - Unexpected data values, such as badly formatted transaction IDs - Seg faults or null pointer references What to do Contact Pivotal Support for help before you attempt to troubleshoot the problem. Continuing after a PANIC could lead to data loss and may make it more difficult to diagnose the problem and recover your system.
http://gpcc.docs.pivotal.io/610/help/alert-master-panic.html
2020-03-28T18:04:50
CC-MAIN-2020-16
1585370492125.18
[]
gpcc.docs.pivotal.io
You don't need to collect any information from your shopper in your payments form. If you have an existing iOS Components integration, you can use our Redirect Component to redirect the shopper to the Adyen-hosted webpage where they can complete the payment. When making a BACS Direct Debit payment, you need to: Before you begin This page explains how to add BACS Direct Debit to your existing iOS Components integration. The iOS Components integration works the same way for all payment methods. If you haven't done this integration yet, refer to our Components integration guide. Before starting your BACS Direct Debit integration: - Make sure that you have set up your back end implementation for making API requests. - Add BACS Direct Debit in your test Customer Area. Show BACS Direct Debit in your payment form Include BACS Direct Debit in the list of available payment methods. You don't need to collect any information from the shopper in your payment form. - Specify in your /paymentMethods request: - countryCode: GB - amount.currency: GBP - channel: Specify iOS. The response contains paymentMethod.type: directdebit_GB. We provide logos for BACS and Direct Debit which you can use on your payment form. For more information, refer to Downloading logos. Make a payment When the shopper proceeds to pay, you need to: - From your server, make a /payments request, specifying: paymentMethod.type: Set this to directdebit_GB to redirect to the Adyen-hosted webpage. returnURL: URL to which the shopper should be redirected back after they complete the payment. For more information on setting a custom URL scheme for your app, read the Apple Developer documentation. The /payments response contains an action object: pass this to your client app. You need it to initialize the Redirect Component. Handle the redirect Use the Redirect Component to redirect the shopper to the Adyen-hosted webpage. After the shopper returns to your app, make a POST /payments/details request from your server and provide the data from the didProvide method of your client app.
https://docs.adyen.com/payment-methods/bacs/ios-component
2020-03-28T18:22:45
CC-MAIN-2020-16
1585370492125.18
[]
docs.adyen.com