Columns: content (string, 0–557k chars) · url (string, 16–1.78k chars) · timestamp (timestamp[ms]) · dump (string, 9–15 chars) · segment (string, 13–17 chars) · image_urls (string, 2–55.5k chars) · netloc (string, 7–77 chars)
Mapbox Terrain-RGB v1 This tileset reference document includes information to help you use the data in the Mapbox Terrain-RGB v1 tileset. See the Mapbox Terrain-DEM raster tileset for an updated and optimized version of this tileset. Overview Mapbox Terrain-RGB is a Mapbox-provided raster tileset that contains global elevation data encoded in raster PNG tiles as color values that can be decoded to raw heights in meters. You can use Terrain-RGB for a variety of visual and analytical applications. As of December 1, 2021, elevation data updates will not be applied to the Mapbox Terrain-RGB tileset. Data updates will be applied to the Mapbox Terrain-DEM tileset, which is an optimized version of Mapbox Terrain-RGB. Elevation data The Mapbox Terrain-RGB tileset contains encoded elevation data. Request tiles from the Terrain-RGB API endpoint with your Mapbox access token: {zoom}/{x}/{y}.pngraw?access_token=YOUR_MAPBOX_ACCESS_TOKEN Then use this equation to decode pixel values to height values: height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1) For more detailed instructions, see the Access elevation data guide. Changelog - Oct 27, 2016 - Published the tileset.
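As a quick illustration of the decoding equation above, here is a minimal Python sketch; it assumes Pillow and NumPy are installed and that a Terrain-RGB tile has already been downloaded to a hypothetical tile.png:

```python
# Minimal sketch: decode a Terrain-RGB tile (already saved as tile.png)
# into per-pixel elevations in meters, using the equation from this page.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("tile.png").convert("RGB"), dtype=np.float64)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)
height_m = -10000 + (r * 256 * 256 + g * 256 + b) * 0.1

print(height_m.shape, float(height_m.min()), float(height_m.max()))
```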
https://docs.mapbox.com/data/tilesets/reference/mapbox-terrain-rgb-v1/
2022-06-25T01:48:46
CC-MAIN-2022-27
1656103033925.2
[]
docs.mapbox.com
Installation¶ With Composer¶ If your TYPO3 installation is using Composer, install the extension fluid_styled_content by: composer require typo3/cms-fluid-styled-content If you are not working with the latest version of TYPO3, you will need to add a version constraint: composer require typo3/cms-fluid-styled-content:"^10.4" Installing the extension prior to TYPO3 11.4¶ Before TYPO3 11.4 it was still necessary to manually activate extensions installed via Composer using the Extensions module. If you are using TYPO3 with Composer and are using a version of TYPO3 that is older than 11.4, you will need to activate the extension: - Access Admin Tools > Extensions > Installed Extensions and search for fluid_styled_content (note the underscores) - Activate the extension by selecting the Activate button in the column labeled A/D Without Composer¶ If you are working with a legacy installation of TYPO3, this extension will already be part of the installation because the “classic” .tar & .zip packages come bundled with all system extensions. However, whilst the extension is already downloaded, it might not be activated. To activate the extension fluid_styled_content, navigate to Admin Tools > Extensions > Installed Extensions and search for fluid_styled_content (note the underscores). If the extension is not active, activate it by selecting the Activate button in the column labeled A/D. Activate the extension by clicking the Activate button. System Maintainer rights are required to activate the extension. Upgrade¶ If you upgrade your TYPO3 CMS installation from one major version to another (for example 10.4 to 11.5), it is advised to run the Upgrade Wizard. It guides you through the necessary steps to upgrade your database records. Open the tool at Admin Tools > Upgrade > Upgrade Wizard and run all suggested steps. Next step¶ Include the default TypoScript template.
https://docs.typo3.org/c/typo3/cms-fluid-styled-content/11.5/en-us/Installation/Index.html
2022-06-25T02:39:12
CC-MAIN-2022-27
1656103033925.2
[array(['../_images/ActivateExtension.png', 'Activate the extension by clicking the Activate button.'], dtype=object) ]
docs.typo3.org
If you have only VLAN-backed logical switches, you can connect the switches to VLAN ports on a tier-0 or tier-1 router so that NSX-T Data Center can provide layer-3 services. Prerequisites Verify that Manager mode is selected in the NSX Manager user interface. See NSX Manager. If you do not see the Policy and Manager mode buttons, see Configure the User Interface Settings. Procedure - With admin privileges, log in to NSX Manager. - Locate the router on the tier-0 or tier-1 logical routers page and select it. - Click the Configuration tab and select Router Ports. - Click Add. - Enter a name for the router port and optionally a description. - In the Type field, select Centralized. - For URPF Mode, select Strict or None. URPF (unicast Reverse Path Forwarding) is a security feature. - (Required) Select a logical switch. - Select whether this attachment creates a switch port or updates an existing switch port. If the attachment is for an existing switch port, select the port from the drop-down menu. - Enter the router port IP address in CIDR notation. - Click Add.
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-BB4D26AD-AF81-4B7C-AA4A-366AEF6F79D2.html
2022-06-25T02:26:43
CC-MAIN-2022-27
1656103033925.2
[]
docs.vmware.com
Software Image Displays a table of software images associated with an enterprise, with a default software image selected. From the available list of images, you can change the default software image by selecting the radio button corresponding to the image. Note: The software images associated with an enterprise will be displayed only if the Edge Image Management feature is enabled for the Enterprise. For the selected default image, information such as Software Version, Configuration Type, Orchestrator Address, Heartbeat Interval, Time Slice Interval, and Stats Upload Interval is displayed. Once you change the default image and click Save Changes, a confirmation message listing the affected Edges appears. Click Confirm to upgrade the affected Edges with the newly selected default image. Note: If an Enterprise (with the Edge Image Management feature enabled) uses a deprecated software image, then the following warning message is displayed at the top of the System Settings page for the enterprise: "This enterprise is using a deprecated software image." For security reasons, Support cannot access or view user identifiable information.
https://docs.vmware.com/en/VMware-SD-WAN-on-AWS-GovCloud-(US)/4.3/VMware-SD-WAN-on-AWS-GovCloud-US-Administration-Guide/GUID-578EC8EF-B18B-4CBF-8EF7-B60FA295E1D2.html
2022-06-25T02:50:10
CC-MAIN-2022-27
1656103033925.2
[array(['images/GUID-D98F898E-78C0-42E2-902D-F9DE1264E5BA-low.png', None], dtype=object) ]
docs.vmware.com
To request RMA reactivation using Zero Touch Provisioning: Procedure - Log in to SD-WAN Orchestrator, and then go to Configure > Edges. - Click the Edge that you want to replace. The Edge Overview page appears. - Scroll down to the RMA Reactivation area, and then click Request Reactivation to generate a new activation key. The status of the Edge changes to Reactivation Pending mode. Note: The reactivation key is valid for one month only. When the key expires, a warning message is displayed. To generate a new key, click Generate New Activation Key. - In the RMA Serial Number field, enter the serial number of the new Edge that is to be activated. - From the RMA Model drop-down list, select the hardware model of the new Edge that is to be activated. Note: If the Serial Number and the hardware model do not match the new Edge that is to be activated, the activation fails. - Click Update. The status of the new Edge changes to Reactivation Pending and the status of the old Edge changes to RMA Requested. To view the Edge State, go to Administration > Zero Touch Provisioning > Assigned. - Complete the following tasks to activate the new Edge: - Disconnect the old Edge from the power and network. - Connect the new Edge to the power and network. Ensure that the Edge is connected to the Internet. Results The new Edge is redirected to the SD-WAN Orchestrator where it is automatically activated. The status of the new Edge changes to Activated. What to do next Return the old Edge to VMware so that the logical entry for the old Edge with the state RMA Requested gets removed from the Administration > Zero Touch Provisioning > Assigned page.
https://docs.vmware.com/en/VMware-SD-WAN/5.0/vmware-sd-wan-partner-guide/GUID-9A781028-A81B-4581-ABA8-CF0A8E9EB336.html
2022-06-25T02:59:15
CC-MAIN-2022-27
1656103033925.2
[]
docs.vmware.com
The following conditions are required. Install the Network Configuration Manager on a dedicated server and do not run other applications on that server. Before you start the installation, ensure that you have a proper FQDN set on the server where you are performing the installation. To verify, run the hostname command and check if it is returning the server name with DNS. Before you start the installation, ensure that no instance of Tomcat exists on the target machine. Note: If an instance of Tomcat exists on the machine, you must remove it before proceeding with installation. The following conditions are highly recommended. Before you start the installation, run the prereq-check.pl script to verify that all of the Network Configuration Manager software prerequisites are met. Note: It is strongly recommended to run the prereq-check.pl script on a server before installing any Network Configuration Manager software on that server. For information about obtaining and running prereq-check.pl, see “Running a software prerequisites check” on page 17.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.1/ncm-installation-guide-10.1.1/GUID-60AAA722-E956-40D1-9BFC-0827330D1F1B.html
2022-06-25T00:44:24
CC-MAIN-2022-27
1656103033925.2
[]
docs.vmware.com
To support and enable the F5 load balancing feature, vRealize Network Insight Cloud includes the required components and entities described below. Overview of an F5 Load Balancer and its Components - Application Servers - The machines where the applications are hosted. For example, if you have a web server, your server runs on application servers (physical or virtual). - Service Nodes - F5 represents the application servers as service nodes. So, a service node has the same IP address or FQDN as the application server. Each service node can have multiple applications. - Pool Members - A logical entity. Each application in a service node is represented by a pool member, which has the same IP address or FQDN as the service node. To identify different applications, the pool members embed the port number with the IP address of the service node. - Pools - All pool members that serve one application are grouped as a pool. - Virtual Servers - A public-facing IP address of the application. So, the clients that want to use an application connect to the virtual server IP address (for example, 10.100.100.10) and port number (80 or 21). - Client Terminal - The connection starts from a client terminal, which is a virtual machine. The client request connects to the virtual server, which decides the pool members based on the pool. The pool member then forwards the request to the application server (VM or physical server). Note: A single application server can serve multiple requests from different ports and different service nodes.
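To make the hierarchy above concrete, here is a small illustrative Python model; it is not an F5 or vRealize Network Insight API, and every name and address except the 10.100.100.10 example from the text is made up:

```python
# Illustrative model of the entities described above: a virtual server fronts
# a pool whose members are application ports on service nodes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceNode:      # the application server, identified by IP or FQDN
    address: str

@dataclass
class PoolMember:       # one application on a service node: node address + port
    node: ServiceNode
    port: int

@dataclass
class Pool:             # all pool members serving one application
    name: str
    members: List[PoolMember] = field(default_factory=list)

@dataclass
class VirtualServer:    # public-facing IP/port that clients connect to
    address: str
    port: int
    pool: Pool

web_node = ServiceNode("10.0.0.21")                     # hypothetical node
web_pool = Pool("web", [PoolMember(web_node, 8080)])
vip = VirtualServer("10.100.100.10", 80, web_pool)      # example IP from the text
member = vip.pool.members[0]
print(f"{vip.address}:{vip.port} -> {member.node.address}:{member.port}")
```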
https://docs.vmware.com/en/VMware-vRealize-Network-Insight/Cloud/com.vmware.vrni.using.doc/GUID-9D24DC53-2998-4DBB-B7D6-FDE5722EC5A4.html
2022-06-25T01:26:54
CC-MAIN-2022-27
1656103033925.2
[array(['images/GUID-FD5C9D41-7C7D-48A0-8203-33AC44E7F4A7-low.png', None], dtype=object) ]
docs.vmware.com
You can run the non_project_run.tcl script in Vivado® Design Suite batch mode or Tcl mode. - Batch mode runs the sourced script, and then automatically exits the tool after the script has finished processing. - Tcl mode runs the sourced script, and returns to the Tcl command prompt when finished. - Change to the directory where the lab materials are stored: - On Linux: cd <Extract_Dir>/lab_4 - Launch the Vivado® Design Suite Tcl shell, and source a Tcl script to create the tutorial design: - On Linux: vivado -mode tcl -source non_project_run.tcl - On Windows, launch the Vivado Design Suite Tcl shell from the Start menu. - In the Tcl shell: - Change to the directory where the lab materials are stored: Vivado% cd <Extract_Dir>/lab_4 - Source the Tcl script to create the design: Vivado% source non_project_run.tcl After the sourced script has completed, the Tcl shell displays the Vivado% prompt. Important: If your Tcl script has an error in it, the script will halt execution at the point of the error. You will need to fix the error, and re-source the Tcl script as needed. If you are running in Tcl mode, you may need to close the current project with close_project, or exit the Vivado tool with exit to source the Tcl script again. Running the script results in the creation of a directory called IP. Output products for the various IPs used in the design are written to this directory. Reports, design checkpoints, and a bitstream for the design are also written to disk. - You can open the design in the Vivado IDE to perform further analysis. To open the Vivado IDE from the Tcl prompt, type: start_gui.
https://docs.xilinx.com/r/2021.2-English/ug939-vivado-designing-with-ip-tutorial/Source-the-Tcl-Script
2022-06-25T01:43:38
CC-MAIN-2022-27
1656103033925.2
[]
docs.xilinx.com
SPMC¶ This document describes the SPMC (S-EL1) implementation for OP-TEE. More information on the SPMC can be found in the FF-A specification. SPMC Responsibilities¶ The SPMC is a critical component in the FF-A flow. Some of its major responsibilities are: - Initialisation and run-time management of the SPs: - The SPMC component is responsible for initialisation of the Secure Partitions (loading the image, setting up the stack, heap, …). - Routing messages between endpoints: - The SPMC is responsible for passing FF-A messages from normal world to SPs and back. It is also responsible for passing FF-A messages between SPs. - Memory management: - The SPMC is responsible for the memory management of the SPs. Memory can be shared between SPs and between an SP and the normal world. This document describes OP-TEE as a S-EL1 SPMC. Secure Partitions¶ Secure Partitions (SPs) are the endpoints used in the FF-A protocol. When OP-TEE is used as a SPMC, SPs run primarily inside S-EL0. OP-TEE will use FF-A for its transport layer when the OP-TEE CFG_CORE_FFA=y configuration flag is enabled. The SPMC will expose the OP-TEE core, privileged mode, as a secure endpoint itself. This is used to handle all GlobalPlatform programming mode operations. All GlobalPlatform messages are encapsulated inside FF-A messages. The OP-TEE endpoint will unpack the messages and afterwards handle them as standard OP-TEE calls. This is needed as TF-A (S-EL3) only allows FF-A messages to be passed to the secure world when the SPMD is enabled. SPs run from the initial boot of the system until power down and don’t have any built-in session management compared to GPD TEE TAs. The only means of communicating with the outside world is through messages defined in the FF-A specification. The context of a SP is saved between executions. The Trusted Services repository includes the libsp library, which exports all the functions needed to build a S-EL0 SP. It also includes many examples of how to create and implement a SP. SPMC Program Flow¶ SP images are stored in the OP-TEE image as early TAs are: the binary images are embedded in OP-TEE Core in a read-only data section. This makes it possible to load SPs during boot; no rich OS is needed in the normal world. ldelf is used to load and initialise the SPs. Starting SPs¶ SPs are loaded and started as the last step in OP-TEE’s initialisation process. This is done by adding sp_init_all() to the boot_final initcall level. Each SP is loaded into the system using ldelf and started. This is based around the same process as loading the early TAs. All SPs are run after they are loaded and run until a FFA_MSG_WAIT is sent by the SP. (Sequence diagram, all within secure_partition.c: sp_init_uuid() → sp_open_session() → find_sp() → sp_create_session() → ldelf_load_ldelf() → ldelf_init_with_ldelf() → sp_init_set_registers() → enter_sp() → sp_msg_handler().) Once all SPs are loaded and started we return to the SPMD and the Normal World is booted.
SP message handling¶ The SPMC is split into two main message handlers: thread_spmc_msg_recv(), which handles every message received from the Normal World, and sp_msg_handler(), which handles all messages going to or coming from an SP. When a FFA_MSG_SEND_DIRECT_REQ message is received by the SPMC from the Normal World, a new thread is started. The FF-A message is passed to the thread and it will call the sp_msg_handler() function. Whenever the SPMC (sp_msg_handler()) receives a message not intended for one of the SPs, it will exit the thread and return to the Normal World, passing the FF-A message. Currently only a FFA_MSG_SEND_DIRECT_REQ can be passed from the Normal World to a SP. Every message received by the SPMC from the Normal World is handled in the thread_spmc_msg_recv() function. When entering a SP we need to be running in an OP-TEE thread. This is needed to be able to push the TS session (we push the TS session to get access to the SP memory). Currently the only possibility to enter a SP from the Normal World is via a FFA_MSG_SEND_DIRECT_REQ. Whenever we receive a FFA_MSG_SEND_DIRECT_REQ message which doesn’t have OP-TEE as the endpoint-id, we start a thread and forward the FF-A message to sp_msg_handler(). The sp_msg_handler() is responsible for all messages coming from or going to a SP. It runs in a while loop and will handle every message until it comes across a message which is not intended for the secure world. After a message is handled by the SPMC or when it needs to be forwarded to a SP, sp_enter() is called. sp_enter() will copy the FF-A arguments and resume the SP. When the SPMC needs to have access to the SP's memory, it will call ts_push_current_session() to gain access and ts_pop_current_session() to release the access. Running and exiting SPs¶ The SPMC resumes/starts the SP by calling sp_enter(). This will set up the SP context and jump into S-EL0. Whenever the SP performs a system call it will end up in sp_handle_svc(). sp_handle_svc() stores the current context of the SP and makes sure that we don’t return to S-EL0 but instead return to S-EL1, back to sp_enter(). sp_enter() will pass the FF-A registers (x0-x7) to spmc_sp_msg_handler(). This will process the FF-A message. RxTx buffer management¶ RxTx buffers are used by the SPMC to exchange information between an endpoint and the SPMC. The rxtx_buf struct is used by the SPMC for abstracting buffer management. Every SP has a struct rxtx_buf which will be passed to every function that needs access to the RxTx buffer. A separate struct rxtx_buf is defined for the Normal World, which gives access to the Normal World buffers. Configuration¶ Adding SPs to the Image¶ The following flags have to be enabled to enable the SPMC. The SP images themselves are loaded by using the SP_PATHS flag. These should be added to the OP-TEE configuration inside the OP-TEE/build.git directory. OPTEE_OS_COMMON_FLAGS += CFG_CORE_FFA=y # Enable the FF-A transport layer OPTEE_OS_COMMON_FLAGS += SP_PATHS="path/to/sp-xxx.elf path/to/sp-yyy.elf" # Add the SPs to the OP-TEE image TF_A_FLAGS += SPD=spmd SPMD_SPM_AT_SEL2=0 # Build TF-A with the SPMD enabled and without S-EL2
https://optee.readthedocs.io/en/3.16.0/architecture/spmc.html
2022-06-25T01:39:37
CC-MAIN-2022-27
1656103033925.2
[]
optee.readthedocs.io
In this section, we will learn how the Fourier Transform is used to analyze the frequency characteristics of various filters. For images, the 2D Discrete Fourier Transform (DFT) is used to find the frequency domain. A fast algorithm called the Fast Fourier Transform (FFT) is used for calculation of the DFT. Details about these can be found in any image processing or signal processing textbook. Please see the Additional Resources section. For a sinusoidal signal, \(x(t) = A \sin(2 \pi ft)\), we can say \(f\) is the frequency of the signal, and if its frequency domain is taken, we can see a spike at \(f\). If the signal is sampled to form a discrete signal, we get the same frequency domain, but it is periodic in the range \([- \pi, \pi]\) or \([0,2\pi]\) (or \([0,N]\) for an N-point DFT). You can consider an image as a signal which is sampled in two directions. So taking the Fourier transform in both the X and Y directions gives you the frequency representation of the image. More intuitively, for the sinusoidal signal, if the amplitude varies very fast in a short time, you can say it is a high frequency signal. If it varies slowly, it is a low frequency signal. You can extend the same idea to images. Where does the amplitude vary drastically in images? At the edge points, or noise. So we can say that edges and noise are high frequency content in an image. If there is not much change in amplitude, it is a low frequency component. Numpy's np.fft.fft2() gives the frequency transform, with the zero-frequency (DC) component at the top-left corner; to bring it to the center, you need to shift the result by \(\frac{N}{2}\) in both directions. This is simply done by the function np.fft.fftshift(). (It is easier to analyze that way.) Once you have found the frequency transform, you can find the magnitude spectrum. The result looks like the image below: you can see a whiter region at the center, showing that low frequency content dominates. Now that you have found the frequency transform, you can do some operations in the frequency domain, like high pass filtering, and reconstruct the image, i.e. find the inverse DFT. For that you simply remove the low frequencies by masking with a rectangular window of size 60x60. Then apply the inverse shift using np.fft.ifftshift() so that the DC component again comes to the top-left corner. Then find the inverse FFT using the np.fft.ifft2() function. The result, again, will be a complex number; you can take its absolute value. The result looks like the image below: it shows that High Pass Filtering is an edge detection operation. This is what we saw in the Image Gradients chapter. It also shows that most of the image data is present in the low frequency region of the spectrum. Anyway, we have seen how to find the DFT, IDFT etc. in Numpy. Now let's see how to do it in OpenCV. If you closely watch the result, especially the last image in the JET colormap, you can see some artifacts (one instance is marked with a red arrow). It shows some ripple-like structures there, called ringing effects. They are caused by the rectangular window we used for masking: the mask transforms to a sinc shape, which causes this problem. So rectangular windows are not used for filtering; a better option is a Gaussian window. OpenCV provides the functions cv2.dft() and cv2.idft() for this. They return the same result as before, but with two channels. The first channel will have the real part of the result and the second channel will have the imaginary part. The input image should be converted to np.float32 first. We will see how to do it below, and then we have to do the inverse DFT.
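For reference, here is a minimal sketch of the Numpy workflow described above (forward FFT, magnitude spectrum, a 60x60 high-pass mask, inverse FFT). It assumes OpenCV, NumPy, and Matplotlib are installed and uses a hypothetical input image messi.jpg; it is not the tutorial's exact code.

```python
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread("messi.jpg", 0)                 # read as grayscale
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)                      # move the DC component to the center
magnitude_spectrum = 20 * np.log(np.abs(fshift) + 1)   # +1 avoids log(0)

rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
fshift[crow - 30:crow + 30, ccol - 30:ccol + 30] = 0   # remove low frequencies (60x60 window)

img_back = np.abs(np.fft.ifft2(np.fft.ifftshift(fshift)))   # high-pass-filtered image

plt.subplot(131), plt.imshow(img, cmap="gray"), plt.title("Input")
plt.subplot(132), plt.imshow(magnitude_spectrum, cmap="gray"), plt.title("Magnitude Spectrum")
plt.subplot(133), plt.imshow(img_back, cmap="gray"), plt.title("After HPF")
plt.show()
```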
In the previous session, we created an HPF; this time we will see how to remove the high frequency contents in the image, i.e. we apply an LPF to the image. It actually blurs the image. For this, we first create a mask with a high value (1) at low frequencies, i.e. we pass the LF content, and 0 at the HF region. See the result: Performance of DFT calculation is better for some array sizes. It is fastest when the array size is a power of two. Arrays whose size is a product of 2’s, 3’s, and 5’s are also processed quite efficiently. So if you are worried about the performance of your code, you can modify the size of the array to an optimal size (by padding zeros) before finding the DFT. For OpenCV, you have to manually pad zeros. But for Numpy, you specify the new size of the FFT calculation, and it will automatically pad zeros for you. So how do we find this optimal size? OpenCV provides a function, cv2.getOptimalDFTSize(), for this. It is applicable to both cv2.dft() and np.fft.fft2(). Let's check their performance using the IPython magic command timeit. See, the size (342, 548) is modified to (360, 576). Now let's pad it with zeros (for OpenCV) and find the DFT calculation performance. You can do it by creating a new big zero array and copying the data to it, or by using cv2.copyMakeBorder(). First we compare the DFT performance of the Numpy function: it shows a 4x speedup. Now we will try the same with the OpenCV functions. They also show a 4x speed-up. You can also see that the OpenCV functions are around 3x faster than the Numpy functions. This can be tested for the inverse FFT as well. Finally, you can take the Fourier transform of different filter kernels and analyze them. See the result: from the images, you can see what frequency region each kernel blocks, and what region it passes. From that information, we can say why each kernel is an HPF or an LPF.
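And a sketch of the OpenCV path described above (cv2.dft()/cv2.idft() with a low-pass mask, plus cv2.getOptimalDFTSize() with zero padding); again a hypothetical input image, not the tutorial's exact code:

```python
import cv2
import numpy as np

img = cv2.imread("messi.jpg", 0).astype(np.float32)     # hypothetical input image

dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)        # 2-channel result: real, imaginary
dft_shift = np.fft.fftshift(dft, axes=(0, 1))           # shift only the spatial axes

rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
mask = np.zeros((rows, cols, 2), np.float32)
mask[crow - 30:crow + 30, ccol - 30:ccol + 30] = 1      # pass only the low frequencies

filtered = np.fft.ifftshift(dft_shift * mask, axes=(0, 1))
img_back = cv2.idft(filtered)
img_back = cv2.magnitude(img_back[:, :, 0], img_back[:, :, 1])   # blurred (LPF) image

# Pad to the optimal DFT size for faster computation.
nrows, ncols = cv2.getOptimalDFTSize(rows), cv2.getOptimalDFTSize(cols)
padded = cv2.copyMakeBorder(img, 0, nrows - rows, 0, ncols - cols,
                            cv2.BORDER_CONSTANT, value=0)
print(img.shape, "->", padded.shape)
```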
https://docs.opencv.org/3.1.0/de/dbc/tutorial_py_fourier_transform.html
2019-12-05T18:39:37
CC-MAIN-2019-51
1575540481281.1
[]
docs.opencv.org
Please check the build logs and, if you believe this is docs.rs' fault, open an issue. use semver::Version; assert!(Version::parse("1.2.3") == Ok(Version { major: 1u, minor: 2u, patch: 3u, /* … */ }));
https://docs.rs/crate/semver/0.1.4
2019-12-05T17:55:40
CC-MAIN-2019-51
1575540481281.1
[]
docs.rs
Hardware Architecture¶ Ledger devices have a unique architecture in order to leverage the security of the Secure Element while still being able to interface with many different peripherals such as the screen, buttons, the host computer over USB, or Bluetooth & NFC in the case of the Ledger Blue. In order to accomplish this, we attached an additional STM32 microcontroller (“the MCU”) to the Secure Element (“the SE”) which acts as a “dumb router” between the Secure Element and the peripherals. The microcontroller doesn’t perform any application logic and it doesn’t store any of the cryptographic secrets used by BOLOS, it simply manages the peripherals and notifies the Secure Element whenever new data is ready to be received. BOLOS applications are executed entirely on the Secure Element. In this section, we’ll take a look at the hardware architecture to better understand the hardware-related constraints before analyzing their software implications. Multiple Processors: Secure Element Proxy¶ A detailed BOLOS architecture diagram BOLOS is split between two hardware chips, one being secure (the ST31 Secure Element), and the other having JTAG enabled and acting as a proxy (the STM32 MCU). Furthermore, the Secure Element is also split into two parts: the firmware which is under NDA and is therefore closed-source, and the SDK & application-loaded code which is open source friendly. The BOLOS firmware is responsible for low-level I/O operations and implements the SE-MCU link (though the handling of the protocol between the SE and the MCU is done by the currently running app). BOLOS relies on the collaboration of both chips to empower Secure Element applications. At first glance, and even at second and all following, the Secure Element is a very powerful piece of hardware but lacks inputs / outputs. In our architecture, we solved this problem by appending the MCU which is full of inputs / outputs so it can act as a proxy for the Secure Element to explore new horizons. In a sense, the MCU can be seen as a supercharged coprocessor of the Secure Element. Not considering security implications (which are beyond the scope of this section), and thanks to a simple asynchronous protocol, the Secure Element drives the proxy. The SE-MCU link protocol is called SEPROXYHAL or SEPH in source code and documentation. The “HAL” stands for Hardware Abstraction Layer. SEPROXYHAL¶ The SEPROXYHAL protocol is structured as a serialized list of three types of packets: Events, Commands, and Statuses. Since SEPROXYHAL is the only channel for the SE to communicate with the outside world, if there is an error at the protocol level (such as the order or formatting of Events / Commands / Statuses getting messed up), then the SE ends up completely isolated and unable to communicate. When developing an application, this is typically the most common failure scenario. If this happens, the device must be rebooted to reset the SEPROXYHAL protocol state. Hopefully, multiple levels of software guards are implemented to avoid such cases. The protocol works as follows: - The MCU sends an Event (button press, ticker, USB transfer, …). - The SE responds with a list of zero or more Commands in response to the Event. - The SE sends a Status indicating that the Event is fully processed and waits for another Event. SEPROXYHAL protocol concept As a matter of fact, due to buffer size, requests to display something to the screen are sent using a Status.
When the MCU has finished processing the Display Status, it issues a Display Processed Event indicating that it is ready to receive another Display Status. As a result, displaying multiple elements on the screen (in order to build an entire user interface) must be done asynchronously from the core application logic. This process is facilitated by a UX helper implemented in the SDK, which will be discussed further in the next chapter. The SE throws an exception to applications that attempt to send more than one Status in a row without a new Event being fetched in between.
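As a rough mental model of this exchange, here is a toy Python sketch (not the actual BOLOS or MCU code; the event, command, and status names are made up): one Event is answered with zero or more Commands and exactly one Status, and each Display Status is acknowledged with a Display Processed Event before the next one is sent.

```python
# Toy model of the SEPROXYHAL round trip described above.
ui_elements = ["title", "icon", "button_label"]     # hypothetical screen elements

def secure_element_handle(event):
    """Return (commands, status) for one incoming Event."""
    if event in ("button_press", "display_processed") and ui_elements:
        element = ui_elements.pop(0)
        return [], f"display_status:{element}"      # ask the MCU to draw one element
    return [], "general_status"                     # nothing left to do this round

def mcu_send(event):
    commands, status = secure_element_handle(event)
    for cmd in commands:
        print("MCU executes command:", cmd)
    print("SE closed the round with:", status)
    if status.startswith("display_status"):
        # The MCU draws the element, then acknowledges with a Display Processed Event.
        mcu_send("display_processed")

mcu_send("button_press")   # drains the three elements, then ends on a general status
```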
https://ledger.readthedocs.io/en/latest/bolos/hardware_architecture.html
2019-12-05T18:31:00
CC-MAIN-2019-51
1575540481281.1
[array(['../_images/bolos_architecture.png', 'Detailed BOLOS architecture'], dtype=object) array(['../_images/seproxyhal.png', 'SEPROXYHAL protocol concept'], dtype=object) ]
ledger.readthedocs.io
Returns information about a specific code signing job. You specify the job by using the jobId value that is returned by the StartSigningJob operation. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-signing-job --job-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --job-id (string) The ID of the signing job on input. To display details about a signing job: The following describe-signing-job example displays details about the specified signing job. aws signer describe-signing-job \ --job-id 2065c468-73e2-4385-a6c9-0123456789abc Output: { "status": "Succeeded", "completedAt": 1568412037, "platformId": "AmazonFreeRTOS-Default", "signingMaterial": { "certificateArn": "arn:aws:acm:us-west-2:123456789012:certificate/6a55389b-306b-4e8c-a95c-0123456789abc" }, "statusReason": "Signing Succeeded", "jobId": "2065c468-73e2-4385-a6c9-0123456789abc", "source": { "s3": { "version": "PNyFaUTgsQh5ZdMCcoCe6pT1gOpgB_M4", "bucketName": "signer-source", "key": "MyCode.rb" } }, "profileName": "MyProfile2", "signedObject": { "s3": { "bucketName": "signer-destination", "key": "signed-2065c468-73e2-4385-a6c9-0123456789abc" } }, "requestedBy": "arn:aws:iam::123456789012:user/maria", "createdAt": 1568412036 } jobId -> (string) The ID of the signing job on output. source -> (structure) The object that contains the name of your S3 bucket or your raw code. s3 -> (structure) The S3Source object. bucketName -> (string) Name of the S3 bucket. key -> (string) Key name of the bucket object that contains your unsigned code. version -> (string) Version of your source image in your version-enabled S3 bucket. signingMaterial -> (structure) The Amazon Resource Name (ARN) of your code signing certificate. certificateArn -> (string) The Amazon Resource Name (ARN) of the certificate that is used to sign your code. platformId -> (string) The microcontroller platform to which your signed code image will be distributed. profileName -> (string) The name of the profile that initiated the signing operation. overrides -> (structure) A list of any overrides that were applied to the signing operation. A signing configuration that overrides the default encryption or hash algorithm of a signing job. encryptionAlgorithm -> (string) A specified override of the default encryption algorithm that is used in a code signing job. hashAlgorithm -> (string) A specified override of the default hash algorithm that is used in a code signing job. Map of user-assigned key-value pairs used during signing. These values contain any information that you specified for use in your signing job. key -> (string) value -> (string) createdAt -> (timestamp) Date and time that the signing job was created. completedAt -> (timestamp) Date and time that the signing job was completed. requestedBy -> (string) The IAM principal that requested the signing job. status -> (string) Status of the signing job. statusReason -> (string) String value that contains the status reason. signedObject -> (structure) Name of the S3 bucket where the signed code image is saved by code signing. s3 -> (structure) The S3SignedObject. bucketName -> (string) Name of the S3 bucket. key -> (string) Key name that uniquely identifies a signed code image in your bucket.
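If you are scripting this outside the AWS CLI, the same operation is available through boto3's signer client; a minimal sketch, assuming boto3 is installed and credentials are configured, reusing the example job ID from the output above:

```python
import boto3

signer = boto3.client("signer")
job = signer.describe_signing_job(jobId="2065c468-73e2-4385-a6c9-0123456789abc")

print(job["status"])                      # e.g. "Succeeded"
print(job["signedObject"]["s3"]["key"])   # where the signed code image was written
```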
https://docs.aws.amazon.com/cli/latest/reference/signer/describe-signing-job.html
2019-12-05T17:31:36
CC-MAIN-2019-51
1575540481281.1
[]
docs.aws.amazon.com
RuleWriteAttribute Class Definition type RuleWriteAttribute = class inherit RuleReadWriteAttribute Public NotInheritable Class RuleWriteAttribute Inherits RuleReadWriteAttribute Inheritance: RuleReadWriteAttribute → RuleWriteAttribute Remarks This attribute is used to support a forward chaining model that causes the re-evaluation of rules based on changes to fields and properties. The RuleReadAttribute and RuleWriteAttribute indicate the properties read or written by the property or method the attribute is applied to. RuleInvokeAttributes are used to indicate that this property or method uses other methods, which must also be checked for dependencies.
https://docs.microsoft.com/en-us/dotnet/api/system.workflow.activities.rules.rulewriteattribute?view=netframework-4.8
2019-12-05T18:31:04
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Problem The New Relic Infrastructure agent is consuming too much CPU. Solution The New Relic Infrastructure agent is designed to report a broad range of system data with minimal CPU and memory consumption. However, if you have a need to reduce your CPU consumption, you can disable or decrease the sampling frequency of various samplers and plugins. This topic highlights some newrelic-infra.yml configurations that may reduce your CPU usage: Reduce event sampling The Infrastructure agent reports several default events at specific frequencies. To lower the overhead, you can reduce the sampling frequency in seconds, or you can completely disable the samplers by setting the corresponding property value to -1. We don't recommend a sample rate larger than 60 seconds because you may see gaps in the New Relic user interface charts. The table below lists some samplers to configure: Reduce agent plugin reporting The Infrastructure agent has built-in plugins that collect inventory data (specific system configuration and state information). For some systems, the CPU consumption may be relatively high if the plugins are gathering a lot of data. To reduce the footprint, you can disable or decrease the sampling frequency for specific plugins that report data you don’t want. - How to enable and disable plugins Disable a single plugin: To disable a plugin, set the corresponding property value to -1. Disable all plugins: disable_all_plugins: true Enable selected plugins: To enable certain plugins, insert an exception in disable_all_plugins. For example, the following configuration disables all plugins, but the Network Interfaces plugin reports every 120 seconds: disable_all_plugins: true network_interface_interval_sec: 120 - Disable SELinux semodule -l (Linux only) The SELinux plugin periodically invokes the semodule -l system command to get information about the existing SELinux modules. In most CentOS/RedHat distributions, this command will generate CPU consumption peaks. To disable this functionality, insert the following configuration option in your /etc/newrelic-infra.yml file: selinux_enable_semodule: false - Reduce or disable Sysctl (Linux only) The Sysctl plugin walks the whole /sys directory structure and reads values from all the files there. Disabling it or reducing the interval may decrease some CPU System time in the Infrastructure agent. You can disable the plugin by setting the interval to a negative number, or reduce the frequency by setting the sysctl_interval_sec configuration value to the number of seconds between consecutive executions of the plugin. For example, to execute the plugin once every 10 minutes: sysctl_interval_sec: 600 To disable the Sysctl plugin: sysctl_interval_sec: -1 The current default value for the sysctl_interval_sec property is 60. - Additional plugins to reduce or disable The following inventory plugins are not especially CPU consuming, but you can still reduce their frequency or disable them by setting the corresponding configuration options. Linux plugins For configuration of these Linux plugins, see Plugin variables: - Cloud Security Groups - Daemon Tools - DPKG - Facter - Kernel Modules - Network interfaces - RPM - SELinux - Supervisord - Sysctl - Systemd - SysV - Upstart - Users - SSHD configuration Windows plugins For configuration of these Windows plugins, see Plugin variables: - Network interfaces - Windows services - Windows updates Review on-host integrations If you use Infrastructure on-host integrations, this may have additional impacts on CPU usage.
The nature of the impact and the methods to adjust the impact depend on the integration you're using. Here are some ways to adjust on-host integration CPU usage: - See if your integration has configuration options you can adjust. If possible, spread out the monitoring load by adding additional Infrastructure agents. For example, the Kafka integration allows a multi-agent deployment.
https://docs.newrelic.com/docs/infrastructure/new-relic-infrastructure/troubleshooting/reduce-infrastructure-agents-cpu-footprint
2019-12-05T18:44:04
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
Since 2017 we’ve designed multiple badges. This section lists the badges BADGE.TEAM developed or helped develop. These badges were made in collaboration with BADGE.TEAM. These badges were not developed by us, but we’ve added support for them to our ESP32 platform firmware. Our efforts for these badges are more of an “aftermarket upgrade”, so to say… The CARD10 uses the hatchery as its app repository. For all other details about this project (the hardware, firmware and API) please have a look at the CARD10 project over at the CCC website. An incomplete but slowly growing list of event badges and their derivatives. Help us extend this list by pointing us towards badges that are missing.
https://docs.badge.team/badges/
2019-12-05T17:00:59
CC-MAIN-2019-51
1575540481281.1
[]
docs.badge.team
Manage access to Azure management with Conditional Access Caution Make sure you understand how Conditional Access works before setting up a policy to manage access to Azure management. Make sure you don't create conditions that could block your own access to the portal. Conditional Access in Azure Active Directory (Azure AD) controls access to cloud apps based on specific conditions that you specify. To allow access, you create Conditional Access policies that allow or block access based on whether or not the requirements in the policy are met. Typically, you use Conditional Access to control access to your cloud apps. You can also set up policies to control access to Azure management based on certain conditions (such as sign-in risk, location, or device) and to enforce requirements like multi-factor authentication. To create a policy for Azure management, you select Microsoft Azure Management under Cloud apps when choosing the app to which to apply the policy. The policy you create applies to all Azure management endpoints, including the following: - Azure portal - Azure Resource Manager provider - Classic Service Management APIs - Azure PowerShell - Visual Studio subscriptions administrator portal - Azure DevOps - Azure Data Factory portal Note that the policy applies to Azure PowerShell, which calls the Azure Resource Manager API. It does not apply to Azure AD PowerShell, which calls Microsoft Graph. For more information on how to set up and use Conditional Access, see Conditional Access in Azure Active Directory.
https://docs.microsoft.com/en-us/azure/role-based-access-control/conditional-access-azure-management
2019-12-05T18:10:21
CC-MAIN-2019-51
1575540481281.1
[array(['media/conditional-access-azure-management/conditional-access-azure-mgmt.png', 'Conditional Access for Azure management'], dtype=object) ]
docs.microsoft.com
Manage compute in Azure SQL Data Warehouse Learn about managing compute resources in Azure SQL Data Warehouse. Lower costs by pausing the data warehouse, or scale the data warehouse to meet performance demands. What is compute management? The architecture of SQL Data Warehouse separates storage and compute, allowing each to scale independently. As a result, you can scale compute to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural consequence of this architecture is that billing for compute and storage is separate. If you don't need to use your data warehouse for a while, you can save compute costs by pausing compute. Scaling compute You can scale out or scale back compute by adjusting the data warehouse units setting for your data warehouse. Loading and query performance can increase linearly as you add more data warehouse units. For scale-out steps, see the Azure portal, PowerShell, or T-SQL quickstarts. You can also perform scale-out operations with a REST API. To perform a scale operation, SQL Data Warehouse first kills all incoming queries and then rolls back transactions to ensure a consistent state. Scaling only occurs once the transaction rollback is complete. For a scale operation, the system detaches the storage layer from the Compute nodes, adds Compute nodes, and then reattaches the storage layer to the Compute layer. Each data warehouse is stored as 60 distributions, which are evenly distributed to the Compute nodes. Adding more Compute nodes adds more compute power. As the number of Compute nodes increases, the number of distributions per compute node decreases, providing more compute power for your queries. Likewise, decreasing data warehouse units reduces the number of Compute nodes, which reduces the compute resources for queries. The following table shows how the number of distributions per Compute node changes as the data warehouse units change. DWU6000 provides 60 Compute nodes and achieves much higher query performance than DWU100. Finding the right size of data warehouse units To see the performance benefits of scaling out, especially for larger data warehouse units, you want to use at least a 1-TB data set. To find the best number of data warehouse units for your data warehouse, try scaling up and down. Run a few queries with different numbers of data warehouse units after loading your data. Since scaling is quick, you can try various performance levels in an hour or less. Recommendations for finding the best number of data warehouse units: - For a data warehouse in development, begin by selecting a smaller number of data warehouse units. A good starting point is DW400 or DW200. - Monitor your application performance, observing the number of data warehouse units selected compared to the performance you observe. - Assume a linear scale, and determine how much you need to increase or decrease the data warehouse units. - Continue making adjustments until you reach an optimum performance level for your business requirements. When to scale out Scaling out data warehouse units impacts these aspects of performance: - Linearly improves performance of the system for scans, aggregations, and CTAS statements. - Increases the number of readers and writers for loading data. - Maximum number of concurrent queries and concurrency slots. Recommendations for when to scale out data warehouse units: - Before you perform a heavy data loading or transformation operation, scale out to make the data available more quickly. 
- During peak business hours, scale out to accommodate larger numbers of concurrent queries. What if scaling out does not improve performance? Adding data warehouse units increases the parallelism. If the work is evenly split between the Compute nodes, the additional parallelism improves query performance. If scaling out is not changing your performance, there are some reasons why this might happen. Your data might be skewed across the distributions, or queries might be introducing a large amount of data movement. To investigate query performance issues, see Performance troubleshooting. Pausing and resuming compute Pausing compute causes the storage layer to detach from the Compute nodes. The Compute resources are released from your account. You are not charged for compute while compute is paused. Resuming compute reattaches storage to the Compute nodes, and resumes charges for Compute. When you pause a data warehouse: - Compute and memory resources are returned to the pool of available resources in the data center. - Data warehouse unit costs are zero for the duration of the pause. - Data storage is not affected and your data stays intact. - SQL Data Warehouse cancels all running or queued operations. When you resume a data warehouse: - SQL Data Warehouse acquires compute and memory resources for your data warehouse units setting. - Compute charges for your data warehouse units resume. - Your data becomes available. - After the data warehouse is online, you need to restart your workload queries. If you always want your data warehouse accessible, consider scaling it down to the smallest size rather than pausing. For pause and resume steps, see the Azure portal or PowerShell quickstarts. You can also use the pause REST API or the resume REST API. Drain transactions before pausing or scaling We recommend allowing existing transactions to finish before you initiate a pause or scale operation. For more information, see Understanding transactions and Optimizing transactions. Automating compute management To automate the compute management operations, see Manage compute with Azure functions. Each of the scale-out, pause, and resume operations can take several minutes to complete. If you are scaling, pausing, or resuming automatically, we recommend implementing logic to ensure that certain operations have completed before proceeding with another action. Checking the data warehouse state through various endpoints allows you to correctly implement automation of such operations. To check the data warehouse state, see the PowerShell or T-SQL quickstart. You can also check the data warehouse state with a REST API. Permissions Scaling the data warehouse requires the permissions described in ALTER DATABASE. Pause and Resume require the SQL DB Contributor permission, specifically Microsoft.Sql/servers/databases/action. Next steps See the how-to guide for managing compute. Another aspect of managing compute resources is allocating different compute resources for individual queries. For more information, see Resource classes for workload management.
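As a quick illustration of the distribution arithmetic described above; only the 60 distributions and the 60-Compute-node figure for DWU6000 come from this page, the other node counts are purely illustrative:

```python
# The 60 distributions are spread evenly over however many Compute nodes the
# chosen DWU provides, so more nodes means fewer distributions (and therefore
# more compute) per node.
TOTAL_DISTRIBUTIONS = 60

for compute_nodes in (1, 2, 6, 30, 60):        # 60 nodes corresponds to DWU6000
    per_node = TOTAL_DISTRIBUTIONS // compute_nodes
    print(f"{compute_nodes:>2} Compute nodes -> {per_node:>2} distributions per node")
```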
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview
2019-12-05T17:21:44
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Owner and Admins When upgrading or downgrading your subscription, the billing amount depends on the product level and pricing model you select. New Relic allows you to select a monthly or annual commitment for your account by compute-units or host-based pricing. Evaluate subscription options To help you determine which pricing option is best for you, the Subscription page in New Relic's user interface includes an estimator based on usage data gathered during your trial period. You can also view compute-units usage or host-based usage through New Relic's user interface at any time. Change product subscription Owner and Admins Depending on whether you select compute-units pricing or host-based pricing, the self-service options for the Subscription page in New Relic's user interface change. However, the basic workflow is the same. To view, upgrade, or downgrade the New Relic product levels and pricing options for your account's subscription: - Go to rpm.newrelic.com > (account dropdown) > Account settings > Account > Subscription. - New subscriptions: Review the options for Dynamic (compute-units pricing) or Dedicated (host-based pricing) environments, and select Start here. - Continue with the self-service workflow for compute-units pricing or host-based pricing as applicable. - Review the available options for pricing level, commitment or duration, and number of compute units or hosts, and select your choices. - If applicable, select the option to apply a promotional code. - Select Checkout or Save subscription changes as applicable. - Complete the process to confirm your New Relic account's credit card. - Agree to New Relic's Terms of Service and Supplemental Payment Terms as appropriate. If you downgrade your account subscription, you may also see an optional New Relic survey. Switch between host-based and compute-units pricing For information about changing your account from host-based pricing to compute-units pricing (or vice versa), contact your account representative at New Relic, or contact New Relic's Billing Department. Change partner account subscription For information about upgrading or downgrading a New Relic account originally created through a partnership offer, contact your account representative at New Relic, or contact New Relic's Billing Department. For more help Additional documentation resources include: - Account pricing and billing information (use self-service options from the account dropdown in the UI, and find resources to change your billing or product subscription level) - View or change account tax information (update the location New Relic uses to apply sales tax to your account)
https://docs.newrelic.com/docs/accounts/accounts/subscription-pricing/upgrade-or-downgrade-your-new-relic-subscriptions
2019-12-05T18:21:06
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
How to augment event data using check hooks What are check hooks? Check hooks are commands run by the Sensu agent in response to the result of check command execution. The Sensu agent executes the appropriate configured hook, depending on the exit status code (e.g., 1). Why use check hooks? Check hooks allow Sensu users to automate data collection routinely performed by operators investigating monitoring alerts, freeing precious operator time! While check hooks can be used for rudimentary auto-remediation tasks, they are intended for enrichment of monitoring event data. Using check hooks to gather context The purpose of this guide is to help you put in place a check hook which captures the process tree in the event that an nginx_process check returns a status of 2 (critical, not running). Creating the hook The first step is to create a new hook that runs a specific command to capture the process tree. We can set an execution timeout of 10 seconds for this command. sensuctl hook create process_tree \ --command 'ps aux' \ --timeout 10 Assigning the hook to a check Now that the process_tree hook has been created, it can be assigned to a check. Here we apply our hook to an already existing nginx_process check. By setting the type to critical, we ensure that whenever the check command returns a critical status, Sensu executes the process_tree hook and adds the output to the resulting event data. sensuctl check set-hooks nginx_process \ --type critical \ --hooks process_tree Validating the check hook You can verify the proper behavior of the check hook against a specific event by using sensuctl. It might take a few moments after the check hook is assigned for it to appear in the event data. Having confirmed that the hook is attached to our check, we can stop Nginx and observe the check hook in action on the next check execution. Here we use sensuctl to query the event info and send the response to jq so we can isolate the check hook output. Note that this output, although truncated in the interest of brevity, reflects the output of the ps aux command specified in the check hook we created. Now when we are alerted that Nginx is not running, we can review the check hook output to confirm this was the case, without ever firing up an SSH session to investigate! Next steps You now know how to run data collection tasks using check hooks. From this point, here are some recommended resources: - Read the hooks reference for in-depth documentation on hooks.
https://docs.sensuapp.org/sensu-go/5.4/guides/enrich-events-with-hooks/
2019-12-05T18:11:23
CC-MAIN-2019-51
1575540481281.1
[]
docs.sensuapp.org
Pine the places he buried the nuts. This required an enormous amount of energy and time, simply because The Lazy Dog Hacienda is on more than 20 acres of land. It is hard, Reggie thought, to remember where one puts all those walnuts, pecans, and acorns, since the land looks the same. Still I do have one way I remember where some of those walnuts, pecans, and acorns are buried…..
https://www.hickorydocstales.com/docs-dog-days-reggie-and-pine-cones/
2019-12-05T17:40:17
CC-MAIN-2019-51
1575540481281.1
[]
www.hickorydocstales.com
About Log Shipping (SQL Server) SQL Server (Windows only) Azure SQL Database Azure Synapse Analytics (SQL DW) Parallel Data Warehouse: Benefits. monitor server An optional instance of SQL Server that tracks all of the details of log shipping, including: When the transaction log on the primary database was last backed up. When the secondary servers last copied and restored the backup files. Information about any backup failure alerts. Important Once the monitor server has been configured, it cannot be changed without removing log shipping first. backup job A SQL Server Agent job that performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information. When log shipping is enabled, the job category "Log Shipping Backup" is created on the primary server instance. alert job A SQL Server Agent job that raises alerts for primary and secondary databases when a backup or restore operation does not complete successfully within a specified threshold. When log shipping is enabled on a database, the job category "Log Shipping Alert" is created on the monitor server instance. Tip For each alert, you need to specify an alert number. Also, be sure to configure the alert to notify an operator when an alert is raised. Log Shipping Overview Interoperability Log shipping can be used with the following features or components of SQL Server: Prerequisites for Migrating from Log Shipping to Always On Availability Groups (SQL Server) Database Mirroring and Log Shipping (SQL Server) Log Shipping and Replication (SQL Server) Note Always On availability groups and database mirroring are mutually exclusive. A database that is configured for one of these features cannot be configured for the other. Related Tasks Upgrading Log Shipping to SQL Server 2016 See Also Overview of Always On Availability Groups (SQL Server)
https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/about-log-shipping-sql-server?view=sql-server-2017
2019-12-05T17:58:17
CC-MAIN-2019-51
1575540481281.1
[array(['media/ls-typical-configuration.gif?view=sql-server-2017', 'Configuration showing backup, copy, & restore jobs Configuration showing backup, copy, & restore jobs'], dtype=object) ]
docs.microsoft.com
The Layout Editor mode is your go-to tool when you want to edit your user interface. Provided you have the necessary permission, you will see a paintbrush icon at the top-right of your interface. This switches the Layout Editor mode ON. From this point on, you can edit your layout and collections settings. Read on to discover what you can do. Some collections might not be relevant for your operational team. When the Edit mode is ON (1), you will see an eye icon on the left of each collection (2). If this eye is colored, it means that the collection is shown. You can click on it to toggle show/hide a specific collection (3). The same behavior applies to the columns (or fields) of a specific collection. When the Edit mode is ON, you will see at the very left end of all column headers a small Cog on an orange background (1). If you click on it you will be able to reorder the fields (2) and hide/show the desired ones (3). When the Edit mode is ON, the editable elements are surrounded by dotted lines (2). If this is the case, then you can change the order of the elements by drag-and-dropping them. After logging in, Forest Admin automatically redirects you to the dashboard by default. It is possible to change the default tab to the "Data" tab. To do that: Activate the layout editor Change the position of the "Data" tab
https://docs.forestadmin.com/documentation/reference-guide/views/using-the-layout-editor-mode
2019-12-05T17:08:54
CC-MAIN-2019-51
1575540481281.1
[]
docs.forestadmin.com
Download SQL Server Management Studio (SSMS) - Azure Data Architecture Guide
https://docs.microsoft.com/en-au/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017
2019-12-05T18:30:29
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Delete a project Azure DevOps Services | Azure DevOps Server 2019 | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013 In this article, learn how to delete a project from Azure DevOps. Deleting a project helps simplify the navigation to projects that are only in use. Caution Projects are permanently deleted if not restored within 28 days. For more information on restoring projects, see Restore a project. If you want to access project data while the project is deleted (without restoring it) you should save project data. Delete project Organization settings. Select Projects, and then check one or more projects to delete. Select Delete. Confirm deletion by entering the project name, and then select Delete in the popup screen. Your project is deleted and can be restored up to 28 days afterward. Open organization settings Organization settings configure resources for all projects or the entire organization. For an overview of all organization settings, see Project collection administrator role and managing collections of projects. Choose the Azure DevOps logo to open Projects, and then choose Collection settings. Select a service from the sidebar. Settings are organized based on the service they support. Expand or collapse the major sections such as Boards and Pipelines to choose a page. Choose the Azure DevOps logo to open Projects, and then choose Organization settings. Select a service from the sidebar. Settings are organized based on the service they support. Expand or collapse the major sections such as Boards and Pipelines to choose a page. Choose the gear icon to open Collection Settings. From there, you can choose a page. Settings are organized based on the service they support. Delete a project from TFS Using the administration console, you can delete a project from a project collection. Afterwards, you'll need to manually delete any associated reports and SharePoint project portal. Or, you can use the TFSDeleteProject command line tool to delete all artifacts. If you're not a member of one or more of the following administrator groups, get permissions now: Team Foundation Administrators group (required). SQL Server System Administrators group (required). Farm Administrators group for SharePoint Products (required when your deployment uses SharePoint Products). Open the administration console for TFS and delete the project from its project collection. Choose whether to delete external data associated with the project and then initiate the delete action. (Optional) To review the status of the delete action, open the Status tab. To review the details of the delete action, you can open the log file from either the Status tab or Logs tab. Delete reports that remain after deleting a project If your on-premises project used reporting, and you didn't choose to delete external artifacts, you can delete the reports using SQL Server Report Manager. From the project collection page, delete the folder that corresponds to the deleted project. Remove the project portal If your on-premises project had a project portal, all links to that portal are removed from TWA and Team Explorer, but the SharePoint site or website that acted as the portal is not deleted. If you want to delete the portal, you must do so manually after the project has been deleted. See How to: Create, Edit, and Delete Windows SharePoint Services Sites. What to do if the delete action doesn't finish Review the status and log files for the delete action. 
Open the Status tab and for Deleted, review the additional information in parentheses, and take the indicated action. (Processing) means that the process has started and is in progress. (Pending) means that the deletion process has started from a client application. The deletion might be in progress or might have failed. Because the process was started from a client application, the server cannot accurately report the status of the deletion. If a project deletion remains pending for a long time, try to delete the project again from the administration console. (Failed) means that the deletion process started but did not successfully finish. The log file contains specific information about the failure. Review the information about the failure, and then try to delete the project again. If partial data remains, you can also use the TFSDeleteProject command line tool. Related articles
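For scripted cleanup, the project can also be deleted with the Azure DevOps CLI extension; a hedged sketch follows (flag names are assumed from the azure-devops extension, and the project ID and organization URL are placeholders):
# Hedged sketch: delete a project with the Azure DevOps CLI extension.
# Assumes the azure-devops extension is installed and you are signed in (az login or az devops login).
az devops project delete \
  --id <PROJECT_ID> \
  --organization https://dev.azure.com/<YOUR_ORG> \
  --yes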
https://docs.microsoft.com/en-us/azure/devops/organizations/projects/delete-project?view=azure-devops
2019-12-05T17:48:05
CC-MAIN-2019-51
1575540481281.1
[array(['_img/delete-project/ic686857.png?view=azure-devops', 'context menu with delete command'], dtype=object) ]
docs.microsoft.com
BasePackagingPolicy.AcquireStreamForLinkTargets Method Definition
When overridden in a derived class, gets a list of strings, each expressing a LinkTarget element.
public: abstract System::Collections::Generic::IList<System::String ^> ^ AcquireStreamForLinkTargets();
public abstract System.Collections.Generic.IList<string> AcquireStreamForLinkTargets ();
abstract member AcquireStreamForLinkTargets : unit -> System.Collections.Generic.IList<string>
Public MustOverride Function AcquireStreamForLinkTargets () As IList(Of String)
Returns
Remarks
Use the list to compose LinkTarget elements that can be inserted into the PageContent.LinkTargets element for the corresponding page. Wrap each string in the list in markup by using the following form: <LinkTarget Name="target name" />, where target name is the string. For more information about the <PageContent.LinkTargets> and LinkTarget elements, see chapter 3 in the XML Paper Specification (XPS) specification, which you can obtain at XPS: Specification and License Downloads.
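As a hedged illustration of that remark (not taken from the reference itself), a caller holding an instance of some concrete BasePackagingPolicy subclass might build the markup like this:
// Hedged sketch: wrap each returned name in <LinkTarget Name="..." /> markup, as the remarks describe.
// "policy" is assumed to be an instance of a concrete BasePackagingPolicy subclass supplied by the serialization pipeline.
using System.Collections.Generic;
using System.Text;
static string BuildLinkTargetsMarkup(System.Windows.Xps.Serialization.BasePackagingPolicy policy)
{
    IList<string> names = policy.AcquireStreamForLinkTargets();
    var builder = new StringBuilder();
    foreach (string name in names)
    {
        builder.AppendLine($"<LinkTarget Name=\"{name}\" />");
    }
    return builder.ToString();
}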
https://docs.microsoft.com/en-us/dotnet/api/system.windows.xps.serialization.basepackagingpolicy.acquirestreamforlinktargets?view=netframework-4.8
2019-12-05T18:14:36
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
If you are using WooCommerce Product Add-Ons (), you will need the following code snippet to include the product add-on price as well for the discount.
Snippet 1: Use the following snippet if you see the prices of add-ons displaying as 0 on the cart page or if they do not display at all:
if(!function_exists('woo_discount_rules_has_price_override_method')){
    function woo_discount_rules_has_price_override_method($hprice, $product){
        return true;
    }
}
add_filter('woo_discount_rules_has_price_override', 'woo_discount_rules_has_price_override_method', 10, 2);
Snippet 2: Use this if you do not want the add-on prices to be considered in the discount rule and you want the discount to apply only on the main product's price.
https://docs.flycart.org/en/articles/2282107-compatibility-woocommerce-product-add-ons
2019-12-05T18:04:12
CC-MAIN-2019-51
1575540481281.1
[]
docs.flycart.org
"alias" Module Description This module allows the server administrator to define custom channel commands (e.g. !kick) and server commands (e.g. /OPERSERV). Configuration To load this module use the following <module> tag: <module name="m_alias.so"> <alias> The <alias> tag defines a custom channel or server command. This tag can be defined as many times as required. The replacement field can contain any of the following template variables: Example Usage Defines an oper-only /OPERSERV server command that messages the OperServ client if it is on a U-lined server: <alias text="OPERSERV" replace="PRIVMSG OperServ :$2-" format="*" requires="OperServ" channelcommand="no" usercommand="yes" operonly="yes" uline="yes"> <fantasy> The <fantasy> tag defines settings about custom channel commands. This tag can only be defined once. Example Usage <fantasy allowbots="no" prefix="."> Special Notes If you are using services you may find it useful to use one of the predefined alias files which ship with InspIRCd. If you are using Anope for your services add the following tag to your configuration: <include file="conf/examples/aliases/anope.conf.example"> If you are using Atheme for your services add the following tag to your configuration: <include file="conf/examples/aliases/atheme.conf.example"> If you are using a system-wide installation you will need to use an absolute path to these files.
https://docs.inspircd.org/2/modules/alias/
2019-12-05T18:17:19
CC-MAIN-2019-51
1575540481281.1
[]
docs.inspircd.org
# How do I ensure my Course Completion Certificates print correctly? > [!Alert] Please be aware that not all functionality covered in this and linked articles may be available to you. When you [create a Course Completion Certificate Template](create-completion-certificates.md) in the TMS, it is very important to follow best practices and to preview the certificate in our system to ensure it will print correctly and placeholder text is replaced with actual data. When Word documents are converted to PDF, fonts and images may not render the same as in Word. ## Best Practices for Fonts Not all fonts are supported by PDF documents. Therefore, fonts may be substituted upon conversion. When this occurs, the resultant text may be cut off, distorted, very large/small or unreadable. It is best to use what is commonly referred to as “base” or “standard” fonts: - Times (or Times New Roman) in regular, bold, italic, and bold italic - Helvetica (or Arial) in regular, bold, italic, and bold italic - Courier (again, same four versions) - Symbol - Zapf Dingbats These are the most likely to not be changed upon conversion. If you want to use other fonts, be sure to [preview](#certificate-preview-prior-to-use) your certificate and test it in PDF before using it for a class or course assignment. ## Best Practices and Tips for Images Converting from Word to PDF could result in a few issues: image overlap or grainy, out of focus image. **Image Overlap or Partial Image**: Transparency may be ignored in the PDF. If this occurs, try resizing or slightly adjust the layout of the images in Word so they do not overlap. **Grainy, Out of Focus Image**: Word may compress images upon ‘Save’. This will change the resolution of the image and cause it to not render clearly in a PDF certificate. To keep this from occurring in Word: 1. Replace the image in the Word template file with the original image. Do not save. 1. Access **Word Options** (File > Options). 1. Click **Advanced** in the left navigation. 1. Scroll down to **Image Size and Quality**. 1. Ensure your document is listed in the dropdown. 1. Select **Do not compress images in file** and select the default resolution for your images. Higher ppi images render better but make the file larger. 1. Save the file and upload it to the **Course Completion Certificate Template** profile in the TMS. You should [preview](#certificate-preview-prior-to-use) your certificate after the changes. ## Certificate Preview Prior to Use It is very important each time you upload a file to your Course Completion Certificate Template profile to preview it after you have save it. This will provide you with a zipped file containing a test Word certificate and PDF certificate. Open these to verify the appearance and accuracy of your certificate in both formats. To view how your certificate will render: 1. On the **Course Completion Certificate Template** profile page, click **Preview**. 1. Fill out test data for any of the fields your template contains. 1. Click **Preview**. 1. Save the zip file to your computer. 1. In the zip file, open both the **Word** and **PDF**. 1. Verify the fonts are readable, the images rendered correctly, and the data correctly replaced the placeholder text. If anything is wrong, refer to the sections ([Fonts](#best-practices-for-fonts), [Images](#best-practices-and-tips-for-images), [Data Replacement](#data-replacement-tip)) of this article to troubleshoot them. 
## Data Replacement Tip If the test data you entered did not replace the placeholder text in the test files the Preview tool generated, the actual data will not be present when the certificate is used with a course. To correct this issue, in your template file on your computer, try the following: 1. Delete the entire placeholder value. 1. Type it in again. 1. Save the file. 1. Upload the edited file into the **Course Completion Certificate Template** profile in the TMS. 1. Repeat the [preview](#certificate-preview-prior-to-use).
https://docs.learnondemandsystems.com/tms/tms-administrators/miscellaneous/ensure-completion-certificates-print-correctly.md
2019-12-05T18:26:41
CC-MAIN-2019-51
1575540481281.1
[array(['/tms/images/word-options.png', None], dtype=object)]
docs.learnondemandsystems.com
# Can I rearrange items on the Site Administration page? The Site Administration page can be customized. Related topics are grouped together on tiles to help you navigate quickly. You can rearrange the tiles so the items you work with the most are at the top. To do this: 1. Click and hold on a tile you want to move. 1. Drag it to the spot you want it. The other tiles will move to make room. Now every time you access the Admin page, your new organization is there until you decide to change it again.
https://docs.learnondemandsystems.com/tms/tms-administrators/tms-fundamentals/rearrange-items-on-site-administration.md
2019-12-05T18:25:20
CC-MAIN-2019-51
1575540481281.1
[]
docs.learnondemandsystems.com
ResourceSet Constructor (Stream) Creates a new instance of the ResourceSet class that reads resources from the given stream. Namespace: System.Resources Assembly: mscorlib (in mscorlib.dll) Syntax 'Declaration <SecurityCriticalAttribute> _ Public Sub New ( _ stream As Stream _ ) [SecurityCriticalAttribute] public ResourceSet( Stream stream ) Parameters - stream Type: System.IO.Stream The Stream of resources to be read. The stream should refer to an existing resources file. Version Information Silverlight Supported in:
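A hedged usage sketch follows; the file path and resource key are placeholders, and ResourceSet is disposable, so both objects are wrapped in using:
// Hedged sketch: read resources from an existing .resources stream.
using System.IO;
using System.Resources;
static void ReadResources()
{
    using (Stream stream = File.OpenRead("app.resources"))   // placeholder path
    using (var resources = new ResourceSet(stream))
    {
        string greeting = resources.GetString("Greeting");    // "Greeting" is a placeholder key
        System.Console.WriteLine(greeting);
    }
}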
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/55wx2cc2%28v%3Dvs.95%29
2019-12-05T17:34:08
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
API Overview In addition to an instant marketplace for ERC721-based items, OpenSea provides an HTTP API for fetching non-fungible ERC721 assets based on a set of query parameters. Monitoring every ERC721 contract and caching metadata for each individual token can be a lot of overhead for wallets and websites that wish to display all of a user's collectibles, gaming items, and other assets. By aggregating this data in an easy-to-consume API, we make it easy for wallets and other sites. Here are a couple of products that have integrated the OpenSea API: Coinbase Wallet 3Box Opera Wallet Trust Wallet Editional Balance CryptoGoods ToknTalk CKBox Vault Wallet Bitski Portis Amberdata Go Wallet Pillar Wallet ReceiptChain imToken Rainbow We provide this API free of charge, but ask in return that you give credit to OpenSea on your site, and link to the OpenSea marketplace from the assets you display (where appropriate). Please see our Logos & Brand Guidelines for images that you can use to credit OpenSea. This API is rate-limited. If you'd like to use it in a production environment, request an API key here. Asset Object The: token_id The token ID of the ERC721 asset image_url An image for the item background_color The background color to be displayed with the item name Name of the item external_link External link to the original website for the item asset_contract Dictionary of data on the contract itself (see asset contract section) owner Dictionary of data on the owner (see account section) traits A list of traits associated with the item (see traits section) last_sale When this item was last sold (null if there was no last sale) Traits are special properties on the item, that can either be numbers or strings. Below is an example of how OpenSea displays the traits for a specific item. Here are some of the fields contained in a trait: trait_type The name of the trait (for example color) value The value of this trait (can be a string or number) display_type How this trait will be displayed (options are number, boost_percentage, boost_number). See the adding metadata section for more details Asset contracts contain data about the contract itself, such as the CryptoKitties contract or the CryptoFighters contract. Here are the field associated with an asset contract: address Address of the asset contract name Name of the asset contract symbol Symbol, such as CKITTY image_url Image associated with the asset contract description Description of the asset contract external_link Link to the original website for this contract Event Object Asset events represent state changes that occur for assets. This includes putting them on sale, bidding on them, selling, them, cancelling sales, composing assets, transferring them, and more. Account Object Accounts represent wallet addresses and associated usernames, if the owner entered one on OpenSea. Here's an overview of the fields contained in an account: address The Ethereum wallet address that uniquely identifies this account. profile_img_url An auto-generated profile picture to use for this wallet address. To get the user's Ethmoji avatar, use the Ethmoji SDK. user An object containing username, a string for the the OpenSea username associated with the account. Will be null if the account owner has not yet set a username on OpenSea. config A string representing public configuration options on the user's account, including affiliate and affiliate_requested for OpenSea affiliates and users waiting to be accepted as affiliates. 
Retrieving assets To retrieve assets from our API, call the /assets endpoint with the desired filter parameters. Note: sorting by listing_date or current_price will filter out assets that are not on sale, along with assets being sold on an escrow contract (where the true owner doesn't own the asset anymore). To sort assets by their escrowed auction price, use current_escrow_price for your order_by parameter. You'll need to do this to display and sort auctions from the CryptoKitties contract, for example. Auctions created on OpenSea don't use an escrow contract, which enables gas-free auctions and allows users to retain ownership of their items while they're on sale. So this is just a heads up in case you notice some assets from opensea.io not appearing in the API. The endpoint will return the following fields: Retrieving bundles Bundles are groups of items for sale on OpenSea. You can buy them all at once in one transaction, and you can create them without any transactions or gas, as long as you've already approved the assets inside. Retrieving a single asset To retrieve an individual from our API, call the /asset endpoint with the address of the asset's contract and the token id. The endpoint will return an Asset Object. Retrieving events The /events endpoint provides a list of events that occur on the assets that OpenSea tracks. The "event_type" field indicates what type of event it is (transfer, successful auction, etc). The endpoint will return the following fields: Retrieving accounts The /accounts endpoint provides a list of accounts that OpenSea tracks. The endpoint will return the following fields: Retrieving collections Use this endpoint to fetch collections and dapps that OpenSea shows on opensea.io, along with dapps and smart contracts that a particular user cares about. The /collections endpoint provides a list of all the collections supported and vetted by OpenSea. To include all collections relevant to a user (including non-whitelisted ones), set the owner param. Each collection in the returned area - all in one API call! Retrieving contracts (deprecated) Use the collections endpoint above instead! Contracts have been grouped into collections, one per dapp. The /asset_contracts endpoint provides a list of all the asset contracts supported by OpenSea. Each asset_contract in the returned area follows the schema of the previously-described asset_contract. You can also use this endpoint to find which dapps an account uses, and how many items they own in each - all in one API call! Getting Started with the Orderbook This page will help you get started with OpenSea Orderbook. You can use this endpoint to query the OpenSea orderbook for buy orders (bids and offers) and sell orders (auctions, listings, and bundles). If you're using JavaScript, you can also use the SDK (), or fork the starter project, pictured below. The orderbook. The OpenSea orderbook, in dapp form. Terminology What are orderbooks and how do they work? An orderbook is just a list of orders that an exchange uses to record the interest of buyers and sellers. On OpenSea, most actions are off-chain, meaning they generate orders that are stored in the orderbook and can be fulfilled by a matching order from another user. exchange the item for payment. This intent is stored in the OpenSea orderbook orderbook. In all other scenarios, only the first order (the "maker" order) is stored.. Rinkeby API Overview The.
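As a hedged example of calling the /assets endpoint described above (the base URL, header name, and query parameters are assumptions based on common usage; check the current reference before relying on them):
# Hedged sketch: fetch assets owned by an address via the /assets endpoint.
import requests
owner = "0x0000000000000000000000000000000000000000"  # placeholder wallet address
response = requests.get(
    "https://api.opensea.io/api/v1/assets",            # assumed base URL
    params={"owner": owner, "limit": 20},
    # headers={"X-API-KEY": "<your key>"},             # assumed header name, if you have an API key
)
response.raise_for_status()
for asset in response.json().get("assets", []):
    print(asset.get("token_id"), asset.get("name"))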
https://docs.opensea.io/reference
2019-12-05T16:43:59
CC-MAIN-2019-51
1575540481281.1
[array(['https://files.readme.io/a2bec61-Screen_Shot_2019-02-18_at_10.57.37_PM.png', 'The OpenSea orderbook, in dapp form. https://ships-log.herokuapp.com/'], dtype=object) ]
docs.opensea.io
[−][src]Crate serde Serde Serde is a framework for serializing and deserializing Rust data structures efficiently and generically. The Serde ecosystem consists of data structures that know how to serialize and deserialize themselves along with data formats that know how to serialize and deserialize other things. Serde provides the layer by which these two groups interact with each other, allowing any supported data structure to be serialized and deserialized using any supported data format. See the Serde website for additional documentation and usage examples. Design Where many other languages rely on runtime reflection for serializing data, Serde is instead built on Rust's powerful trait system. A data structure that knows how to serialize and deserialize itself is one that implements Serde's Serialize and Deserialize traits (or uses Serde's derive attribute to automatically generate implementations at compile time). This avoids any overhead of reflection or runtime type information. In fact in many situations the interaction between data structure and data format can be completely optimized away by the Rust compiler, leaving Serde serialization to perform the same speed as a handwritten serializer for the specific selection of data structure and data format. Data formats The following is a partial list of data formats that have been implemented for Serde by the community. - JSON, the ubiquitous JavaScript Object Notation used by many HTTP APIs. - Bincode, a compact binary format used for IPC within the Servo rendering engine. - CBOR, a Concise Binary Object Representation designed for small message size without the need for version negotiation. - YAML, a popular human-friendly configuration language that ain't markup language. - MessagePack, an efficient binary format that resembles a compact JSON. - TOML, a minimal configuration format used by Cargo. - Pickle, a format common in the Python world. - RON, a Rusty Object Notation. - BSON, the data storage and network transfer format used by MongoDB. - Avro, a binary format used within Apache Hadoop, with support for schema definition. - JSON5, A superset of JSON including some productions from ES5. - URL, the x-www-form-urlencoded format. - Envy, a way to deserialize environment variables into Rust structs. (deserialization only) - Envy Store, a way to deserialize AWS Parameter Store parameters into Rust structs. (deserialization only)
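A minimal derive-based sketch, assuming Cargo.toml lists serde with the derive feature plus serde_json (which is a separate crate, not part of serde itself):
// Hedged sketch: derive Serialize/Deserialize and round-trip a value through JSON.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() -> Result<(), serde_json::Error> {
    let p = Point { x: 1, y: 2 };
    let json = serde_json::to_string(&p)?;          // serialize to a JSON string
    let back: Point = serde_json::from_str(&json)?; // deserialize it back
    println!("{} -> {:?}", json, back);
    Ok(())
}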
https://docs.rs/serde/1.0.94/serde/
2019-12-05T18:20:49
CC-MAIN-2019-51
1575540481281.1
[]
docs.rs
. Server shortcuts can appear on multiple pages and you can swipe across pages to see more shortcuts. Horizon Client creates new pages, as needed, to accommodate all of your server shortcuts. Procedure - On the Servers tab, touch and hold the server shortcut until the context menu appears. - Use the context menu to delete the server or edit the server name, server description, or user name. You can also remove a credential that was saved for fingerprint authentication by tapping Remove Credential.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Android/4.1/com.vmware.horizon.android-client-41-doc/GUID-EEEA756D-D827-467D-AB94-FB3D7F251C6D.html
2018-07-15T23:32:31
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Search path expansion
Note: Search path expansion is a very powerful feature. It can be abused to defeat NGLess’ reproducibility mechanisms and to obfuscate which reference information is being used. However, if used correctly, it can greatly simplify file management and enhance reproducibility.
NGLess supports a search path system to find references. Certain functions (such as map()) support search path expansion. For example, you can write:
map(input, fafile="<>/my-reference.fa")
Then if the search path consists of "/opt/ngless-references/", the expanded version will be "/opt/ngless-references/my-reference.fa".
## Named and unnamed search paths
You can have named and unnamed paths in your search path. The rules are a bit complex (see below), but they make sense if you see examples:
map(input, fafile="<references>/my-reference.fa")
With the search path ['references=/opt/ngless-refs'] will result in '/opt/ngless-refs/my-reference.fa'.
With the search path ['internal=/opt/ngless-internal', 'references=/opt/ngless-refs'] will also result in '/opt/ngless-refs/my-reference.fa' as the internal path will not be matched.
With the search path ['internal=/opt/ngless-internal', 'references=/opt/ngless-refs', '/opt/ngless-all'] now it will result in ['/opt/ngless-refs/my-reference.fa', '/opt/ngless-all/my-reference.fa'] as the unnamed path will always match. Since there is more than one result, both are checked (in order).
Using <> (as in the example above) will use only unnamed paths.
## Setting the search path
The search path can be passed on the command line:
ngless script.ngl --search-path "references=/opt/ngless"
Alternatively, you can set it on the ngless configuration file:
search-path = ["references=/opt/ngless"]
Note that the search path is a list, even if it contains a single element.
## Rules
- If a path matches <([^>]*)>, then it is path expanded.
- The search path (which is a list of named and unnamed search paths) is filtered. A path is kept on the list if it is an unnamed path or if the name matches the requested pattern (<references> requests “references”; <> never matches, so that only unnamed paths are kept).
- Paths are tested in order and the first path referring to an existing file is kept. Similarly
http://ngless.readthedocs.io/en/latest/searchpath.html
2018-07-15T22:55:43
CC-MAIN-2018-30
1531676589022.38
[]
ngless.readthedocs.io
Figure 16.34. Example of creating a border from a selection: an image with a selection; after “Select Border”. The command creates a new selection along the edge of an existing selection in the current image. The edge of the current selection is used as a form and the new selection is then created around it. You enter the width of the border, in pixels or some other unit, in the dialog window. Half of the new border lies inside of the selected area and half outside of it. Figure 16.36. Select border with and without “Lock to image edges”: Select border without (middle) and with (right) locked selection. Same selections filled with red.
https://docs.gimp.org/nl/gimp-selection-border.html
2018-07-15T22:48:23
CC-MAIN-2018-30
1531676589022.38
[]
docs.gimp.org
Polynomials
Transition notice.
- Polynomial Package
- Using the Convenience Classes
- Polynomial Module (numpy.polynomial.polynomial)
- Chebyshev Module (numpy.polynomial.chebyshev)
- Legendre Module (numpy.polynomial.legendre)
- Laguerre Module (numpy.polynomial.laguerre)
- Hermite Module, “Physicists’” (numpy.polynomial.hermite)
- HermiteE Module, “Probabilists’” (numpy.polynomial.hermite_e)
- Polyutils
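A short usage sketch of the convenience classes listed above (behaviour assumed from the numpy.polynomial package):
# Hedged sketch: basic use of the numpy.polynomial convenience classes.
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])        # represents 1 + 2x + 3x**2 (coefficients in increasing order)
print(p(2.0))                    # evaluate at x = 2 -> 17.0
print(p.deriv())                 # derivative: 2 + 6x
print(p.roots())                 # complex roots of the polynomial

# Least-squares fit of a degree-2 polynomial to sample data.
x = np.linspace(0, 1, 20)
y = 1 + 2 * x + 3 * x**2
fit = Polynomial.fit(x, y, deg=2)
print(fit.convert())             # coefficients mapped back to the unscaled domain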
https://docs.scipy.org/doc/numpy/reference/routines.polynomials.html
2018-07-15T23:06:59
CC-MAIN-2018-30
1531676589022.38
[]
docs.scipy.org
Writing a layer by example In this document, we will be writing a charm layer that installs and configures the Vanilla forum software. Vanilla is an open source, themeable, pluggable, and multi-lingual forum software, which enables communities to host a forum for discussion topics at scale. Powered by PHP and MySQL, Vanilla is a fine example of a three-tiered application: - Database (MySQL) - Middleware (PHP App) - Load Balancing via HTTP interface Prepare your workspace Building off of $JUJU_REPOSITORY, we want to add two more environment variables to your session. We recommend adding these into your shell configuration file so that they are always available. LAYER_PATH JUJU_REPOSITORY=$HOME/charms export LAYER_PATH=$JUJU_REPOSITORY/layers export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces mkdir -p $LAYER_PATH $LAYER_PATH/vanilla cd $LAYER_PATH/vanilla Note: Exporting the environment variables in this way only sets the variables for the current terminal. If you wish to make these changes persist, add the same export statements to a resource file that are evaluated when you create a new console such as ~/.bashrc depending on your shell. Charm Tools Charm Tools is add-on software that is necessary for charm building. See Charm Tools for information on installation and usage. layer subprocess import call) call('chmod -R 777 /var/www/vanilla/conf'.split(), shell=False) flag,'s worth noting that there is a file for each layer in the reactive directory. This allows the handlers for each layer to remain separate and not conflict. All handlers from each of those files will be discovered and dispatched according to the discovery and dispatch rules. Building your charm Now that the layer is done, let's build it together and deploy the final charm. From within the layer directory, this is as simple as: charm build . Build will take all of the layers, looking first in your local LAYER_PATH and then querying interfaces.juju.solutions, $JUJU_REPOSITORY/trusty/vanilla juju add-relation mysql vanilla juju expose vanilla
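Putting the final commands together, a typical end-to-end deploy-and-relate sequence looks roughly like this (the local charm path, the trusty series, and the mysql charm name are assumptions based on the workflow above):
# Hedged sketch: deploy the locally built charm alongside MySQL and relate them.
juju deploy $JUJU_REPOSITORY/trusty/vanilla
juju deploy mysql
juju add-relation mysql vanilla
juju expose vanilla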
https://docs.jujucharms.com/2.2/en/developer-layer-example
2018-07-15T22:53:19
CC-MAIN-2018-30
1531676589022.38
[array(['./media/author-charm-composing-01.png', 'directory tree'], dtype=object) ]
docs.jujucharms.com
UpdateAssessmentTarget Updates the assessment target that is specified by the ARN of the assessment target. If resourceGroupArn is not specified, all EC2 instances in the current AWS account and region are included in the assessment target. Request Syntax { "assessmentTargetArn": " string", "assessmentTargetName": " string", "resourceGroupArn": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - assessmentTargetArn The ARN of the assessment target that you want to update. Type: String Length Constraints: Minimum length of 1. Maximum length of 300. Required: Yes - assessmentTargetName The name of the assessment target that you want to update. Type: String Length Constraints: Minimum length of 1. Maximum length of 140. Required: Yes - resourceGroupArn The ARN of the resource group that is used to specify the new resource group to associate with the assessment target. Type: String Length Constraints: Minimum length of 1. Maximum length of 300.: 206 X-Amz-Target: InspectorService.UpdateAssessmentTarget X-Amz-Date: 20160331T185748TargetName": "Example", "resourceGroupArn": "arn:aws:inspector:us-west-2:123456789012:resourcegroup/0-yNbgL5Pt" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: 76bc43e7-f772-11e5-a5f3-fb6257e71620 Content-Type: application/x-amz-json-1.1 Content-Length: 0 Date: Thu, 31 Mar 2016 18:57:49 GMT See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
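The same call can also be made from the AWS CLI; a hedged sketch follows (flag names are assumed from the standard CLI mapping of the request parameters above, and the target ARN is a placeholder):
# Hedged sketch: call UpdateAssessmentTarget through the AWS CLI.
aws inspector update-assessment-target \
  --assessment-target-arn "arn:aws:inspector:us-west-2:123456789012:target/<TARGET_ID>" \
  --assessment-target-name "Example" \
  --resource-group-arn "arn:aws:inspector:us-west-2:123456789012:resourcegroup/0-yNbgL5Pt"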
https://docs.aws.amazon.com/inspector/latest/APIReference/API_UpdateAssessmentTarget.html
2018-07-15T23:19:35
CC-MAIN-2018-30
1531676589022.38
[]
docs.aws.amazon.com
An Act to repeal 16.856, 19.36 (12), 84.062, 84.41 (3), 106.04, 111.322 (2m) (c) and 946.15; to amend 19.36 (3), 59.20 (3) (a), 66.0903 (1) (c), 66.0903 (1) (f), 66.0903 (1) (g), 66.0903 (1) (j), 103.503 (1) (a), 103.503 (1) (c), 103.503 (1) (e), 103.503 (1) (g), 103.503 (2), 103.503 (3) (a) 2., 109.09 (1), 111.322 (2m) (d), 230.13 (1) (intro.), 233.13 (intro.) and 978.05 (6) (a); and to create 103.503 (1) (fm) and 103.503 (1) (j) of the statutes; Relating to: elimination of the requirement that laborers, workers, mechanics, and truck drivers employed on the site of a project of public works be paid the prevailing wage. (FE)
https://docs.legis.wisconsin.gov/2017/proposals/ab296
2018-07-15T23:03:25
CC-MAIN-2018-30
1531676589022.38
[]
docs.legis.wisconsin.gov
Use custom policies in Microsoft Intune to allow and block apps for Samsung Knox Standard devices Use the procedure in this article to create a Microsoft Intune custom policy that creates one of the following: - A list of apps that are blocked from running on the device. Apps in this list are blocked from being run, even if they were already installed when the policy was applied. - A list of apps that users of the device are allowed to install from the Google Play store. Only the apps you list can be installed. No other apps can be installed from the store. These settings can only be used by devices that run Samsung Knox Standard. Create an allowed or blocked app list Choose All services > Intune. Intune is located in the Monitoring + Management section. On the Intune pane, choose Device configuration. On the Device configuration pane, choose Manage > Profiles. In the list of profiles pane, choose Create profile. On the Create profile pane, enter a Name and optional Description for this device profile. Choose a Platform of Android, and a Profile type of Custom. Click Settings. On the Custom OMA-URI Settings pane, choose Add. In the Add or Edit OMA-URI Setting dialog box, specify the following settings: For a list of apps that are blocked from running on the device: - Name - Enter PreventStartPackages. - Description - Enter an optional description like 'List of apps that are blocked from running.' - Data type - From the drop-down list, choose String. - OMA-URI - Enter ./Vendor/MSFT/PolicyManager/My/ApplicationManagement/PreventStartPackages - Value - Enter a list of the app package names you want to allow. You can use ; : , or | as a delimiter. (Example: package1;package2;) For a list of apps that users are allowed to install from the Google Play store while excluding all other apps: - Name - Enter AllowInstallPackages. - Description - Enter an optional description like 'List of apps that users can install from Google Play.' - Data type - From the drop-down list, choose String. - OMA-URI - Enter ./Vendor/MSFT/PolicyManager/My/ApplicationManagement/AllowInstallPackages - Value - Enter a list of the app package names you want to allow. You can use ; : , or | as a delimiter. (Example: package1;package2;) Click OK, and then, on the Create Profile pane, choose Create.. The next time each targeted device checks in, the app settings will be applied.
https://docs.microsoft.com/en-us/intune/samsung-knox-apps-allow-block
2018-07-15T22:59:10
CC-MAIN-2018-30
1531676589022.38
[]
docs.microsoft.com
ParagraphPropertiesChange Class Defines the ParagraphPropertiesChange Class.When the object is serialized out as xml, its qualified name is w:pPrChange. Inheritance Hierarchy System.Object DocumentFormat.OpenXml.OpenXmlElement DocumentFormat.OpenXml.OpenXmlCompositeElement DocumentFormat.OpenXml.Wordprocessing.ParagraphPropertiesChange Namespace: DocumentFormat.OpenXml.Wordprocessing Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll) Syntax 'Declaration <ChildElementInfoAttribute(GetType(ParagraphPropertiesExtended))> _ Public Class ParagraphPropertiesChange _ Inherits OpenXmlCompositeElement 'Usage Dim instance As ParagraphPropertiesChange [ChildElementInfoAttribute(typeof(ParagraphPropertiesExtended))] public class ParagraphPropertiesChange : OpenXmlCompositeElement Remarks The following table lists the possible child types: - ParagraphPropertiesExtended <w:pPr> [ISO/IEC 29500-1 1st Edition] 17.13.2.29 pPrChange (Revision Information for Paragraph Properties) This element specifies the details about a single revision to a set of paragraph properties in a WordprocessingML document. This element stores this revision as follows: The child element of this element contains the complete set of paragraph properties which were applied to this paragraph before this revision The attributes of this element contain information about when this revision took place (i.e. when these properties became a 'former' set of paragraph properties). [Example: Consider a paragraph in a WordprocessingML document which is centered, and this change in the paragraph properties is tracked as a revision. This revision would be specified using the following WordprocessingML markup: <w:pPr> <w:jc w: <w:pPrChange w: <w:pPr/> </w:pPrChange> </w:pPr> The pPrChange element specifies that there was a revision to the paragraph properties at 01-01-2006 by John Doe, and the previous set of paragraph properties on the paragraph were the null set (i.e. no paragraph properties explicitly present under the pPr element). end example] [Note: The W3C XML Schema definition of this element’s content model (CT_PPrChange) is located in §A.1. end note] © ISO/IEC29500: 2008. Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. See Also Reference ParagraphPropertiesChange Members DocumentFormat.OpenXml.Wordprocessing Namespace
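A hedged Open XML SDK sketch for producing this element; the Author and Date property names are assumptions based on the SDK's usual tracked-change classes, and the values mirror the example in the remarks:
// Hedged sketch: record a paragraph-properties revision whose former property set was empty.
using DocumentFormat.OpenXml.Wordprocessing;

var change = new ParagraphPropertiesChange(
    new ParagraphPropertiesExtended())            // the former (empty) paragraph properties
{
    Author = "John Doe",                          // w:author
    Date = new System.DateTime(2006, 1, 1),       // w:date
};

// Attach the revision record to the paragraph properties that now carry the new formatting.
var paragraphProperties = new ParagraphProperties(
    new Justification() { Val = JustificationValues.Center },
    change);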
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2010/cc846789(v=office.14)
2018-07-15T23:36:31
CC-MAIN-2018-30
1531676589022.38
[]
docs.microsoft.com
Migration Requirements This article explains the steps for preparing your environment to work with the Dynamic Host Configuration Protocol (DHCP) Server service for Windows® Essential Business Server. These steps are necessary regardless of whether you chose to start the DHCP Server service during the Windows EBS Management Server installation. If you did not have the DHCP Server service in your environment prior to installing Windows EBS, this article explains how to start using the DHCP Server service to manage IP addresses. If you are unfamiliar with the DHCP Server service, read Background Information later in this document before you continue. Preparing for the migration To ensure optimal performance and reliability, you need to migrate your DHCP Server service to Windows EBS as soon as installation and DNS migration are complete. You should not perform this migration before you migrate the DNS role. For instructions about how to migrate DNS to Windows EBS, see the Microsoft Web site (). Important If you are migrating from Windows Small Business Server 2003 (Windows SBS), you must complete this migration and decommission your Windows SBS server within seven days of installing Windows EBS. You can extend this grace period to 21 days by installing a software update for Windows SBS 2003 that supports the “join domain” migration of Windows SBS data and settings. For additional instructions about how to migrate from Windows SBS to Windows EBS, see the Microsoft Web site (). If your existing DHCP server is running the Microsoft® Windows NT® 4.0 or Microsoft® Windows® 2000 Server operating system, the migration process requires you to temporarily install the DHCP server role on a server that is running Windows Server® 2003. (This temporary server must not already be a DHCP server.) This temporary server is needed to help migrate the scopes and settings from your existing DHCP server to the Management Server. Time estimate You will need approximately one hour to complete this task (two hours if your existing DHCP server runs Windows 2000 Server). The time needed depends on the number of clients with static IP addresses. It is recommended that you perform this migration during a time when network usage is low (such as an evening or a weekend), because if there is an issue during the migration, some computers may experience network disconnections. Decision flowchart Study the following flowchart to determine which step-by-step instructions you should start with. It is recommended that you read all the sections before you start the migration. If you are unsure how to answer a question in the flowchart, read How to Answer Questions in the Decision Flowchart later in this document. Figure 1 Decision flowchart Migration overview The following table provides an overview of what will be migrated. If something goes wrong If something goes wrong with this migration, you can reactivate your existing DHCP server to restore network connectivity while you troubleshoot the issue.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-essentials-server-2008/cc463560(v=ws.10)
2018-07-15T23:21:41
CC-MAIN-2018-30
1531676589022.38
[array(['images/cc463560.ad9682f8-4afe-4153-b47b-3e18312175fb%28ws.10%29.gif', 'Flowchart Flowchart'], dtype=object) ]
docs.microsoft.com
Last Updated: 08 June 2017
These release notes include the following topics:
Key Features
VMware Horizon Client for Windows 10 UWP makes it easy to access your Windows remote desktop and remote applications using the VMware Blast display protocol for the best possible user experience on the Local Area Network (LAN) or across a Wide Area Network (WAN).
- Work the way you want to - Use your Windows 10 tablet or smartphone to work on a Windows-based remote desktop from any location. Support for the VMware Blast display protocol means that your desktop is fast and responsive, regardless of where you are.
- Simple connectivity - Horizon Client is tightly integrated with Horizon 7 for simple setup and connectivity.
- Multitasking - Switch between Horizon Client and other apps without losing a remote desktop or application connection. Over Wi-Fi or 3G, your remote desktop is delivered securely to you wherever you are. Enhanced certificate checking is performed on the client. The client app also supports RSA SecurID authentication.
What's New in This Release
- Windows 10 Hello authentication You can use Windows 10 Hello to authenticate to a Connection Server instance in Horizon Client for Windows 10 UWP. This feature requires that biometric authentication is enabled in the Connection Server instance. For information about enabling biometric authentication, see the View Administration document.
- Clipboard redirection support You can copy and paste plain text from your Windows 10 client system to a remote desktop or application. If a Horizon administrator enables the feature, you can also copy and paste plain text from a remote desktop or application to your client system or between two remote desktops or applications.
- Xbox One support You can install Horizon Client for Windows 10 UWP on Xbox One.
- Windows 10 Creators Update support You can install Horizon Client for Windows 10 UWP on a Windows 10 Creators Update machine.
- Support for Workspace ONE Mode Beginning with Horizon 7 version 7.2, a Horizon administrator can enable Workspace ONE mode for a Connection Server instance. If you connect to a Workspace ONE mode enabled server in Horizon Client for Windows 10 UWP, you are redirected to the Workspace ONE portal to launch your entitled desktops and applications.
Using VMware Horizon Client for Windows 10 UWP
- Horizon Client for Windows 10 UWP is supported with the latest maintenance release of Horizon View 6.x and later.
- To install Horizon Client for Windows 10 UWP, open the Microsoft Store app on your device, search for the VMware Horizon Client app, and click Install or Free to download the app to your device.
- For complete installation and configuration information, see Using VMware Horizon Client for Windows 10 UWP.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Windows-10-UWP/4.5/rn/horizon-client-windows-10uwp-45-release-notes.html
2018-07-15T23:33:18
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Testing the Riak CS Installation Installing & Configuring s3cmd Installation The simplest way to test the installation is using the s3cmd script. We can install it on Ubuntu by typing: sudo apt-get -y install s3cmd For our OS X users, either use the package manager of your preference or download the S3 cmd package at. You will need to extract the .tar file, change directories into the folder, and build the package. The process should look something like this: tar -xvzf s3cmd-1.5.0-alpha1.tar.gz cd s3cmd-1.5.0-alpha1 sudo python setup.py install You will be prompted to enter your system password. Enter it and then wait for the installation to complete. Configuration We need to configure s3cmd to use our Riak CS server rather than S3 as well as our user keys. To do that interactively, type the following: s3cmd -c ~/.s3cfgfasttrack --configure If you are already using s3cmd on your local machine, the -c switch allows you to specify a .s3cfg file without overwriting anything you may have presently configured. There are 4 default settings you should change: - Access Key — Use the Riak CS user access key you generated above. - Secret Key — Use the Riak CS user secret key you generated above. - Proxy Server — Use your Riak CS IP. If you followed the Virtual environment configuration, use localhost. - Proxy Port — The default Riak CS port is 8080. You should have copied your Access Key and Secret Key from the prior installation steps. Interacting with Riak CS via S3cmd Once s3cmd is configured, we can use it to create a test bucket: s3cmd -c ~/.s3cfgfasttrack mb s3://test-bucket We can see if it was created by typing: s3cmd -c ~/.s3cfgfasttrack ls We can now upload a test file to that bucket: dd if=/dev/zero of=test_file bs=1m count=2 # Create a test file s3cmd -c ~/.s3cfgfasttrack put test_file s3://test-bucket We can see if it was properly uploaded by typing: s3cmd -c ~/.s3cfgfasttrack ls s3://test-bucket We can now download the test file: # remove the local test file rm test_file # download from Riak CS s3cmd -c ~/.s3cfgfasttrack get s3://test-bucket/test_file # verify that the download was successful ls -lah test_file What's Next If you have made it this far, congratulations! You now have a working Riak CS test instance (either virtual or local). There is still a fair bit of learning to be done, so make sure and check out the Reference section (click “Reference” on the nav on the left side of this page). A few items that may be of particular interest:
http://docs.basho.com/riakcs/latest/tutorials/fast-track/Testing-the-Installation/
2014-04-16T07:41:08
CC-MAIN-2014-15
1397609521558.37
[]
docs.basho.com
To add a recurring class, simply click on the planner at the space selected.
https://docs.influxhq.com/planner/recurring-classes
2020-07-02T18:42:57
CC-MAIN-2020-29
1593655879738.16
[]
docs.influxhq.com
Web services are loosely defined as the use of Internet technologies to make distributed software components talk to each other without human intervention. The software components might perform such business logic as getting a stock quote, searching the inventory of a catalog on the Internet, or integrating the reservation services for an airline and a car rental agency. You can reach across the Internet and use preexisting components, instead of having to write them for your application. A PowerBuilder application can act as a client consuming a Web service that is accessed through the Internet. Through use of SOAP and WSDL, a collection of functions published remotely as a single entity can become part of your PowerBuilder application. A Web service accepts and responds to requests sent by applications or other Web services. For more information about Web services, see Building a Web Services Client (Obsolete)
https://docs.appeon.com/pb2017r3/application_techniques/ch27s03.html
2020-07-02T19:38:13
CC-MAIN-2020-29
1593655879738.16
[]
docs.appeon.com
Getting Started: Tools
Tools
The core tools are:
hh_client: this is the command line interface for Hack's static analysis; it is needed to verify that a project is valid Hack, and is used to find errors in your programs
hhvm: this is used to execute your Hack code, and can either be used for CLI (e.g. hhvm foo.hack) or as a server, and has extensive documentation
Editors and IDEs
You can use any plain-text editor to edit Hack files - however, enhanced integration with Hack is available for several editors. We primarily recommend using Visual Studio Code with the VSCode-Hack extension; this provides standard IDE-like features such as syntax highlighting, go-to-definition, and in-line display of Hack errors. For Vim users, vim-hack provides syntax highlighting and language detection, and the ALE project provides enhanced support for Hack. hack-mode provides a major mode for Emacs users. If you use a different editor or IDE with LSP support, configure it to use hh_client lsp; if you use HHAST, you might want to configure it to use vendor/bin/hhast-lint --mode lsp, but keep in mind this will lead to your editor automatically executing code from a project when that project is opened; for this reason, the ALE integration has HHAST disabled by default, and Visual Studio Code prompts to confirm before executing it.
Dependency Management
Hack dependencies are currently managed using Composer, which must be executed with PHP. Composer can be thought of as an equivalent to npm or yarn.
Other Common Tools
- hackfmt is a CLI code formatter included with HHVM and Hack, and is also used by the various editor and IDE integrations
- HHAST provides code style linting, and the ability to automatically modify code to adapt to some changes in the language or libraries
- hacktest and fbexpect are commonly used together for writing unit tests
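For orientation, a minimal Hack file that hh_client can check and hhvm can run might look like this (the file name is arbitrary):
// hello.hack - check the project with hh_client; run this file with: hhvm hello.hack
<<__EntryPoint>>
function main(): void {
  echo "Hello from Hack\n";
}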
https://docs.hhvm.com/hack/getting-started/tools
2020-07-02T19:42:41
CC-MAIN-2020-29
1593655879738.16
[]
docs.hhvm.com
This topic demonstrates how to run the Hello Query Device sample application, which queries Inference Engine devices and prints their metrics and default configuration values. The sample shows how to use the Query Device API feature. NOTE: This topic describes usage of the C++ implementation of the Query Device Sample. For the Python* implementation, refer to Hello Query Device Python* Sample. To see the required information, run the sample; the application prints all available devices with their supported metrics and default values for configuration parameters.
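A hedged sketch of the invocation, assuming the sample was built with the standard sample build scripts (the build output path is a placeholder):
cd <path_to_build_folder>/intel64/Release    # placeholder build output directory
./hello_query_device                         # typically takes no command-line arguments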
https://docs.openvinotoolkit.org/latest/_inference_engine_samples_hello_query_device_README.html
2020-07-02T20:09:09
CC-MAIN-2020-29
1593655879738.16
[]
docs.openvinotoolkit.org
Click on Data on the top navigation bar. Click on the name of the worksheet you want to edit. Click the Edit button in the upper right hand side of the screen. Make your changes to the worksheet. Click the ellipses icon (3 dots). Click the current name, and enter a new name.
https://docs.thoughtspot.com/5.0/admin/worksheets/edit-worksheet.html
2020-07-02T19:10:30
CC-MAIN-2020-29
1593655879738.16
[]
docs.thoughtspot.com
Graph Visualization¶ Overview¶ Toyplot now includes support for visualizing graphs - in the mathematical sense of vertices connected by edges - using the toyplot.coordinates.Cartesian.graph() and toyplot.graph() functions. As we will see, graph visualizations combine many of the aspects and properties of line plots (for drawing the edges), scatterplots (for drawing the vertices), and text (for drawing labels). At a minimum, a graph can be specified as a collection of edges. For example, consider a trivial social network: [1]: sources = ["Tim", "Tim", "Fred", "Janet"] targets = ["Fred", "Janet", "Janet", "Pam"] … here, we have specified a sequence of source (start) vertices and target (end) vertices for each edge in the graph, which we can pass directly to Toyplot for rendering: [2]: import toyplot toyplot.graph(sources, targets, width=300); Simple as it is, Toyplot had to perform many steps to arrive at this figure: - We specified a set of edges as input, and Toyplot induced a set of unique vertices from them. - Used a layout algorithm to calculate coordinates for each vertex. - Rendered the vertices. - Rendered a set of vertex labels. - Rendered an edge (line) between each pair of connected vertices. We will examine each of these concepts in depth over the course of this guide. Inputs¶ At a minimum, you must specify the edges in a graph to create a visualization. In the above example, we specified a sequence of edge sources and a sequence of edge targets. We could also specify the edges as a numpy matrix (2D array) containing a column of sources and a column of targets: [3]: import numpy edges = numpy.array([["Tim", "Fred"], ["Tim", "Janet"], ["Fred", "Janet"], ["Janet", "Pam"]]) toyplot.graph(edges, width=300); In either case, Toyplot creates (induces) vertices using the edge source / target values. Specifically, the source / target values are used as vertex identifiers, with a vertex created for each unique identifier. Note that vertex identifiers don’t have to be strings, as in the following example: [4]: edges = numpy.array([[0, 1], [0, 2], [1, 2], [2, 3]]) toyplot.graph(edges, width=300); Inducing vertices from edge data is sufficient for many problems, but there may be occaisions when your graph contains disconnected vertices without any edge connections. For this case, you may specify an optional collection of extra vertex identifiers to add to your graph: [5]: extra_vertices=[10] toyplot.graph(edges, extra_vertices, width=300); Layout Algorithms¶ The next step in rendering a graph is using a layout algorithm to determine the locations of the vertices and routing of edges. Graph layout is an active area of research and there are many competing ideas about what constitutes a good layout, so Toyplot provides a variety of layouts to meet individual needs. 
By default, graphs are layed-out using the classic force-directed layout of Fruchterman and Reingold: [6]: import docs edges = docs.barabasi_albert_graph() toyplot.graph(edges, width=500); To explicitly specify the layout, use the :mod: toyplot.layout module: [7]: import toyplot.layout layout = toyplot.layout.FruchtermanReingold() toyplot.graph(edges, layout=layout, width=500); Note that by default most layouts produce straight-line edges, but this can be overridden by supplying an alternate edge-layout algorithm: [8]: layout = toyplot.layout.FruchtermanReingold(edges=toyplot.layout.CurvedEdges()) toyplot.graph(edges, layout=layout, width=500); If your graph is a tree, there are also tree-specific layouts to choose from: [9]: numpy.random.seed(1234) edges = docs.prufer_tree(numpy.random.choice(4, 12)) layout = toyplot.layout.Buchheim() toyplot.graph(edges, layout=layout, width=500, height=200); When computing a layout, Toyplot doesn’t have to compute the coordinates for every vertex … you can explicitly specify some or all of the coordinates yourself. To do so, you can pass a matrix containing X and Y coordinates for the vertices you want to control, that is masked everywhere. Suppose we rendered our tree with the default force directed layout: [10]: toyplot.graph(edges, width=500); … but we want to force vertices 0, 1, and 3 to lie on the X axis: [11]: vcoordinates = numpy.ma.masked_all((14, 2)) # We know in advance there are 14 vertices vcoordinates[0] = (-1, 0) vcoordinates[1] = (0, 0) vcoordinates[3] = (1, 0) toyplot.graph(edges, vcoordinates=vcoordinates, width=500); Note that we’ve “pinned” our three vertices of interest, and the layout algorithm has placed the other vertices around them as normal. This is particularly useful when there are vertices of special significance that we wish to place explicitly, either to steer the layout, or to work with a narrative flow. Keep in mind that we aren’t limited to explicitly constraining both coordinates for a vertex. For example, if we had some other per-vertex variable that we wanted to use for the visualization, we might map it to the X axis: [12]: numpy.random.seed(1234) data = numpy.random.uniform(0, 1, size=14) vcoordinates = numpy.ma.masked_all((14, 2)) vcoordinates[:,0] = data canvas, axes, mark = toyplot.graph(edges, vcoordinates=vcoordinates, width=500) axes.show = True axes.aspect = None axes.y.show = False Now, the X coordinate of every vertex is constrained, while the force-directed layout places just the Y coordinates. Vertex Rendering¶ As you might expect, you can treat graph vertices as a single series of markers for rendering purposes. 
For example, you could specify a custom vertex color, marker, size, and label style:

[13]: edges = docs.barabasi_albert_graph(n=10) layout = toyplot.layout.FruchtermanReingold() #layout = toyplot.layout.FruchtermanReingold(edges=toyplot.layout.CurvedEdges()) vlstyle = {"fill":"white"} toyplot.graph(edges, layout=layout, vcolor="steelblue", vmarker="d", vsize=30, vlstyle=vlstyle, width=500);

Of course, you can assign a [0, N) colormap to the vertices based on their index, or some other variable:

[14]: colormap = toyplot.color.LinearMap(toyplot.color.Palette(["white", "yellow", "red"])) vstyle = {"stroke":toyplot.color.black} toyplot.graph(edges, layout=layout, vcolor=colormap, vsize=30, vstyle=vstyle, width=500);

Edge Rendering

Much like vertices, there are color, width, and style controls for edges:

[15]: estyle = {"stroke-dasharray":"3,3"} toyplot.graph( edges, layout=layout, ecolor="black", ewidth=3, eopacity=0.4, estyle=estyle, vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

Edges can also be rendered with per-edge head, middle, and tail markers. For example, if you wish to show the directionality of the edges in a graph, it is customary to add an arrow at the end of each edge:

[16]: toyplot.graph( edges, layout=layout, ecolor="black", tmarker=">", vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

Of course, you are free to use any of the properties that are available to control the marker appearance:

[17]: toyplot.graph( edges, layout=layout, ecolor="black", tmarker=toyplot.marker.create(shape=">", mstyle={"fill":"white"}), vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

You might also want to place markers at the beginning of each edge:

[18]: toyplot.graph( edges, layout=layout, ecolor="black", hmarker=toyplot.marker.create(shape="o", mstyle={"fill":"white"}), vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

Or you might want to mark the middle of an edge:

[19]: toyplot.graph( edges, layout=layout, ecolor="black", mmarker=toyplot.marker.create(shape="r3x1", size=15, label="1.2", mstyle={"fill":"white"}), vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

Note that markers are aligned with the edge by default, which can make reading text difficult. In many cases you may wish to specify the orientation of each marker as an absolute angle from horizontal:

[20]: toyplot.graph( edges, layout=layout, ecolor="black", mmarker=toyplot.marker.create(shape="r3x1", angle=0, size=15, label="1.2", mstyle={"fill":"white"}), vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );

Alternatively, you may wish to specify the orientation of markers relative to their edges:

[21]: toyplot.graph( edges, layout=layout, ecolor="black", mmarker=toyplot.marker.create(shape="r3x1", angle="r90", size=15, label="1.2", mstyle={"fill":"white"}), vcolor=colormap, vsize=30, vstyle=vstyle, width=500, );
https://toyplot.readthedocs.io/en/latest/graph-visualization.html
2020-07-02T19:44:58
CC-MAIN-2020-29
1593655879738.16
[]
toyplot.readthedocs.io
The Couchbase Web Console is, by default, available on port 8091. Therefore, if your machine can be identified on the network as servera, you can access the Couchbase Web Console by opening http://servera:8091/. Alternatively, you can use an IP address or, if you are working on the machine on which installation was performed, http://localhost:8091/. If you have chosen to run Couchbase Server, the Welcome screen appears when you first access the console, allowing you either to set up a new cluster or to join an existing one. To set up a new cluster, left-click on Setup New Cluster.

Set Up a New Cluster

The New Cluster screen now appears, as follows:

The fields displayed on the screen are:

Cluster Name: Your choice of name for the cluster to be created.

Create Admin Username: Your choice of username, for yourself: the Full Administrator for this cluster. You will have read-write access to all Couchbase Server resources, including the ability to create new users with defined roles and corresponding privileges. Note that Couchbase Server prohibits use of the following characters in usernames: ( ) < > @ , ; : \ " / [ ] ? = { }. Usernames may not be more than 128 UTF-8 characters in length; and it is recommended that they be no more than 64 UTF-8 characters in length, in order to ensure successful onscreen display.

Create Password: Your choice of password, for yourself: the Full Administrator for this cluster. The only default format-requirement is that the password be at least 6 characters in length. However, following cluster-initialization, you can modify (and indeed strengthen) the default password-policy by means of the Couchbase CLI setting-password-policy command.

When you have entered appropriate data into each field, left-click on the Next: Accept Terms button, at the lower right.

Accept Terms

The New Cluster screen now changes, to show the Terms and Conditions for the Enterprise Edition of Couchbase Server:

Check the I accept the terms & conditions checkbox. Then, to register for updates, left-click on the right-facing arrowhead, adjacent to the Register for updates notification. The screen now expands vertically, as follows:

To receive updates, fill out the four newly displayed fields with your first and last name, company-name, and email-address. Provided that the current node is connected to the internet, the Couchbase Server version-numbers corresponding to each node in your cluster will be anonymously sent to Couchbase: this information is used by Couchbase over time, to provide you with appropriate updates, and to help with product-improvement. Your email-address will be added to the Couchbase community mailing-list, so that you can periodically receive Couchbase news and product-information. (You can unsubscribe from the mailing-list at any time using the Unsubscribe link, provided in each newsletter.)

You now have two options for proceeding. If you left-click on the Finish With Defaults button, cluster-initialization is performed with default settings, provided by Couchbase; the Couchbase Web Console Dashboard appears, and your configuration is complete. However, if you wish to customize those settings, left-click on the Configure Disk, Memory, Services button, and proceed as follows.

Configure Couchbase Server

The Configure screen now appears, as follows:

The displayed fields are:

Host Name/IP Address: Enter the hostname or IP address for the machine on which you are configuring Couchbase Server.

Data Disk Path: Enter the location on the current node where the database files will be stored.
Memory Quotas: A series of fields that allows you to specify how much memory should be allocated to each service you select, for both the current node and for each node you may subsequently add to the cluster. Each service can be selected by checking a checkbox, and then specifying the total number of megabytes to be assigned to the service. In each case, a memory quota is suggested, and a minimum quota is required. The sum of all quotas must be within the total amount of available RAM for the current node.

Data Service: Since you are starting a new cluster, the Data service (which is essential for the cluster) is already selected, and its checkbox is disabled. If this is a development system, you may add up to three services. Note that on a production system, it is recommended that only one service ever be allocated per node.

Index Service: Selection and RAM-allocation to support Global Secondary Indexes. This should be 256 MB or more. If this service is selected, a default quota is provided.

Search Service: Selection and RAM-allocation for the Full Text Service. This should be 256 MB or more. If this service is selected, a default quota is provided.

Query Service: Selection of the Query Service; no memory quota needs to be specified for this service.

Index Storage Setting: If the Index Service has been selected, either Standard Global Secondary Indexes or Memory-Optimized Global Secondary Indexes can be chosen here, by means of radio buttons. See Global Secondary Indexes (GSIs) for details.

When you have finished entering your configuration-details, left-click on the Save & Finish button, at the lower right. This configures the server accordingly, and brings up the Couchbase Web Console Dashboard for the first time.

New-Cluster Set-Up: Next Steps

If this is the first server in the cluster, a notification appears, stating that no buckets are currently defined. A bucket is the principal unit of data-storage used by Couchbase Server. In order to save and subsequently access documents and other objects, you must create one or more buckets.

As specified by the notification, you can go to Buckets and begin bucket-creation, or add a sample bucket: left-click on the links provided. A description of how to create, edit, flush, and delete buckets can be found in the section Setting Up Buckets. An architectural description of buckets can be found in the section Buckets. (There are three different kinds of bucket, so you may wish to familiarize yourself with their properties before you start bucket-creation.) Note that sample buckets already contain data, and so are ready for your immediate experimentation and testing.

The buckets that you create must be accessed securely: therefore, Couchbase Server provides a system of Role-Based Access Control (RBAC), which must be used by administrators and applications that wish to access buckets. Each administrator and application is considered to be a user, and must perform bucket-access by passing a username and password. For information on how to set up RBAC users so that they can access the buckets you create, see Authorization.
https://docs.couchbase.com/server/5.0/install/init-setup.html
2020-07-02T18:53:05
CC-MAIN-2020-29
1593655879738.16
[array(['_images/admin/welcome.png', 'welcome'], dtype=object) array(['_images/admin/setUpNewCluster01.png', 'setUpNewCluster01'], dtype=object) array(['_images/admin/TsAndCs01.png', 'TsAndCs01'], dtype=object) array(['_images/admin/registerForUpdates01.png', 'registerForUpdates01'], dtype=object) array(['_images/admin/configureNewCluster01.png', 'configureNewCluster01'], dtype=object) array(['_images/admin/dashboard01.png', 'dashboard01'], dtype=object) array(['_images/admin/joinClusterInitial.png', 'joinClusterInitial'], dtype=object) array(['_images/admin/joinWithCustomConfig.png', 'joinWithCustomConfig'], dtype=object) array(['_images/admin/joinClusterServiceCheckboxes.png', 'joinClusterServiceCheckboxes'], dtype=object) array(['_images/admin/joinExistingNewServiceSettings.png', 'joinExistingNewServiceSettings'], dtype=object) array(['_images/admin/joinClusterMemoryQuotaSaved.png', 'joinClusterMemoryQuotaSaved'], dtype=object) ]
docs.couchbase.com
Crate gc

Thread-local garbage-collected boxes (the Gc<T> type).

The Gc<T> type provides shared ownership of an immutable value. It is marked as non-sendable because the garbage collection only occurs thread-locally.

The crate's items include:

- This rule implements the trace method.
- This rule implements the trace methods with empty implementations.
- A garbage-collected pointer type over an immutable value.
- A mutable memory location with dynamically checked borrow rules that can be used inside of a garbage-collected pointer.
- A wrapper type for an immutably borrowed value from a GcCell<T>.
- A wrapper type for a mutably borrowed value from a GcCell<T>.
- The Finalize trait. Can be specialized for a specific type to define finalization logic for that type.
- The Trace trait, which needs to be implemented on garbage-collected objects.
- Immediately triggers a garbage collection on the current thread.
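As a rough usage sketch (a hypothetical program using only the items described above; the exact feature setup may differ between crate versions):

use gc::{force_collect, Gc, GcCell};

fn main() {
    // Gc<T> provides shared ownership of an immutable, garbage-collected value.
    let shared = Gc::new(42);
    let alias = shared.clone(); // both handles point at the same allocation
    assert_eq!(*shared, *alias);

    // GcCell<T> adds dynamically checked interior mutability inside a Gc pointer.
    let numbers = Gc::new(GcCell::new(vec![1, 2, 3]));
    numbers.borrow_mut().push(4);          // yields a GcCellRefMut, checked at runtime
    assert_eq!(numbers.borrow().len(), 4); // yields a GcCellRef

    // Collection normally happens automatically; this forces one on this thread.
    force_collect();
}

Collection is otherwise triggered automatically once allocated values become unreachable; force_collect() simply runs it eagerly on the current thread.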
https://docs.rs/gc/0.3.3/x86_64-pc-windows-msvc/gc/index.html
2020-07-02T19:16:50
CC-MAIN-2020-29
1593655879738.16
[]
docs.rs
Manage Apps Distribution

Disable the landing page

If you do not want a landing page, you can disable it on the landing page settings page for that specific app:

Disabling the page will not stop the distribution of the app, since it still appears in the testers dashboard and is still alive in the system. If you want to stop distribution of your app, you will need to take one of two actions:

Disable distribution

In the build Settings, go to App Distribution, change it to Disabled, and press Save changes. You will see a message confirming that the new settings have been changed:

If you go to the build overview, you will see another message that the build is expired and testers will not be able to install this build:

PLEASE NOTE: Once the build is disabled, the app will not appear in the testers dashboard, but testers who already installed the app will be able to continue using it.

Deleting the build from the dashboard

If you want to delete a build from the system, go to the build's App Overview menu:

In the left column, select the checkbox of the build you want to delete. You can choose one, many, or all of the builds. Once you have selected the builds, choose Delete (n) builds in the actions drop-down at the bottom:

Confirm the deletion:

A message confirming that the build was deleted will be displayed.

Last updated on 2020-05-31
https://docs.testfairy.com/App_Distribution/Manage_Apps_Distribution.html
2020-07-02T17:46:39
CC-MAIN-2020-29
1593655879738.16
[array(['/img/landing-pages-on-off.png', 'dissable landing page'], dtype=object) array(['/img/app_distribution/dissable-dist-build.png', None], dtype=object) array(['/img/app_distribution/app-dist-save-sucsess.png', None], dtype=object) array(['/img/app_distribution/build-invalid.png', None], dtype=object) array(['/img/app_distribution/select-builds.png', None], dtype=object) array(['/img/app_distribution/delet-builds.png', None], dtype=object) array(['/img/app_distribution/confirm-delete.png', None], dtype=object)]
docs.testfairy.com
At the heart of the Sentori system is the database of contacts. Contacts can be active, suppressed, or marked as having a bad address. Sentori will never send an email message to those contacts who are suppressed or who are marked as having a bad address.

Sentori will make a new contact record for each unique email address. It is not possible for two (or more) contacts to have the same email address. This means that contact records are 'deduplicated' using the email address when new records are added.

The minimum information that is required to create a contact record is a person's email address, but to get the most from Sentori it's a good idea to include more data, such as the person's name and contact details. Sentori allows you to include any piece of data that you want on the contact record. New accounts are set up automatically with some basic contact fields, but you can add further contact fields.

You can add details of your contacts one by one, but typically Sentori users load data in bulk by importing a file. You can download contact data at any time: every time you see a number (indicating a number of contacts) you can click on that number and export those contacts.

Sentori offers a number of different ways to categorise, segment and manage your contact data. These features allow you to send targeted email messages. The options are:
http://docs.sentoriapp.com/contacts
2020-07-02T17:52:10
CC-MAIN-2020-29
1593655879738.16
[]
docs.sentoriapp.com
Class: Aws::Comprehend::Types::StartSentimentDetectionJobResponse

- Defined in: gems/aws-sdk-comprehend/lib/aws-sdk-comprehend/types.rb

Overview

Constant Summary

- SENSITIVE = []

Instance Attribute Summary

- #job_id ⇒ String: The identifier generated for the job.
- #job_status ⇒ String: The status of the job.

Instance Attribute Details

#job_id ⇒ String

The identifier generated for the job. To get the status of a job, use this identifier with the DescribeSentimentDetectionJob operation.

#job_status ⇒ String

The status of the job.
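For context, a hedged Ruby sketch of how this response type is typically obtained is shown below; the region, S3 URIs, and role ARN are placeholders rather than values from this page.

require "aws-sdk-comprehend"

client = Aws::Comprehend::Client.new(region: "us-east-1")

# Start an asynchronous sentiment detection job (all identifiers are illustrative).
resp = client.start_sentiment_detection_job(
  input_data_config: {
    s3_uri: "s3://example-bucket/input/",
    input_format: "ONE_DOC_PER_LINE"
  },
  output_data_config: { s3_uri: "s3://example-bucket/output/" },
  data_access_role_arn: "arn:aws:iam::123456789012:role/example-comprehend-role",
  language_code: "en"
)

# resp is an Aws::Comprehend::Types::StartSentimentDetectionJobResponse.
puts resp.job_id     # identifier generated for the job
puts resp.job_status # e.g. "SUBMITTED"

# The job_id can later be passed to describe_sentiment_detection_job to poll the job status.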
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Comprehend/Types/StartSentimentDetectionJobResponse.html
2020-07-02T19:08:15
CC-MAIN-2020-29
1593655879738.16
[]
docs.aws.amazon.com
To view all the Guest Accounts or find a specific account, click on the Sales button on the Home window, select Sales from the BookingCenter menu, or press Ctrl + 0 to display the Sales window.

Note: The easiest way to add sale items to an account is through the Bookings window, but if you sell a 'Cash Sale' to a walk-in, use the Sales window and issue a 'New Sale', avoiding the need to make a booking.

All Sale Items are listed in this section of the window.
https://docs.bookingcenter.com/plugins/viewsource/viewpagesrc.action?pageId=3642486
2020-07-02T18:23:46
CC-MAIN-2020-29
1593655879738.16
[]
docs.bookingcenter.com
DbContext.OnModelCreating(ModelBuilder) Method

Definition

Override this method to further configure the model that was discovered by convention from the entity types exposed in DbSet<TEntity> properties on your derived context. The resulting model may be cached and re-used for subsequent instances of your derived context.

protected internal virtual void OnModelCreating (Microsoft.EntityFrameworkCore.ModelBuilder modelBuilder);

abstract member OnModelCreating : Microsoft.EntityFrameworkCore.ModelBuilder -> unit
override this.OnModelCreating : Microsoft.EntityFrameworkCore.ModelBuilder -> unit

Parameters

modelBuilder (ModelBuilder): The builder being used to construct the model for this context. Databases (and other extensions) typically define extension methods on this object that allow you to configure aspects of the model that are specific to a given database.

Remarks

If a model is explicitly set on the options for this context (via UseModel(IModel)) then this method will not be run.
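As an illustration, a derived context might override the method as follows; the Blog entity and its configuration are hypothetical and only show the typical shape of such an override.

using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Url { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Configure details that conventions cannot discover from the DbSet properties.
        modelBuilder.Entity<Blog>(entity =>
        {
            entity.ToTable("blogs");          // explicit table name
            entity.Property(b => b.Url)
                  .IsRequired()
                  .HasMaxLength(500);         // column constraints
        });
    }
}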
https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbcontext.onmodelcreating?view=efcore-3.1
2020-07-02T20:01:39
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Prepare a clean desk and some small boxes to unzip the package. Take a picture of the kit contents in case you lose something later. It's better to work in a room without carpet or textured mosaic flooring. Little screws and springs can magically hide themselves if dropped onto the ground.
https://docs.petoi.com/chapter1
2020-07-02T19:05:16
CC-MAIN-2020-29
1593655879738.16
[]
docs.petoi.com
Product Introduction

SoftNAS™ Cloud is a network attached storage (NAS) virtual appliance. Our products are commercial-grade storage management solutions for businesses that require high-speed, reliable storage at affordable prices. SoftNAS Cloud runs on:

- Cloud computing platforms such as Amazon EC2®, VMware vCloud® Air™, and Microsoft® Azure™
- On-premise computing infrastructure such as VMware vSphere®.

Architecture and Technology

For architectural and technology details that apply to every platform, check the All Platforms section. We also provide a platform-centric approach to our documentation. To search by platform (such as AWS, Azure, or VMware) click the relevant link below:
https://docs.softnas.com/pages/diffpages.action?pageId=8454658&originalId=1671203
2020-07-02T19:06:21
CC-MAIN-2020-29
1593655879738.16
[]
docs.softnas.com
PowerBuilder provides a feature called the data pipeline that you can use to migrate data between database tables. This feature makes it possible to copy rows from one or more source tables to a new or existing destination table -- either within a database, or across databases, or even across DBMSs.

Two ways to use data pipelines

You can take advantage of data pipelines in two different ways:

As a utility service for developers

While working in the PowerBuilder development environment, you might occasionally want to migrate data for logistical reasons (such as to create a small test table from a large production table). In this case, you can use the Data Pipeline painter interactively to perform the migration immediately. For more information on using the Data Pipeline painter this way, see Working with Data Pipelines in the Users Guide.

To implement data migration capabilities in an application

If you are building an application whose requirements call for migrating data between tables, you can design an appropriate data pipeline in the Data Pipeline painter, save it, and then enable users to execute it from within the application. This technique can be useful in many situations, such as when you want the application to download local copies of tables from a database server to a remote user, or when you want it to roll up data from individual transaction tables to a master transaction table.

Walking through the basic steps

If you determine that you need to use a data pipeline in your application, you must determine what steps this involves. At the most general level, there are five basic steps that you typically have to perform.

To pipe data in an application:

1. Build the objects you need.
2. Perform some initial housekeeping.
3. Start the pipeline.
4. Handle row errors.
5. Perform some final housekeeping.

The remainder of this chapter gives you the details of each step.
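As a rough PowerScript sketch of what step 3 (starting the pipeline) can look like, consider the following; the object, transaction, and DataWindow names are hypothetical, and the exact setup depends on how the pipeline user object was defined.

// iuo_pipeline is a user object inherited from the pipeline system object,
// itrans_source / itrans_dest are already-connected transaction objects,
// dw_pipe_errors is a DataWindow control that will receive any row errors.
integer li_rc

iuo_pipeline.DataObject = "pipe_sales_extract"   // pipeline object saved in the painter

li_rc = iuo_pipeline.Start(itrans_source, itrans_dest, dw_pipe_errors)

IF li_rc <> 0 THEN
    MessageBox("Pipeline", "Pipeline failed with return code " + String(li_rc))
END IF

A nonzero return code typically indicates a problem such as a missing destination table or rows that could not be written; the rows in error are placed in the error DataWindow so they can be repaired or cancelled, which corresponds to step 4 above.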
https://docs.appeon.com/pb2017r3/application_techniques/ch17s01.html
2020-07-02T18:23:53
CC-MAIN-2020-29
1593655879738.16
[]
docs.appeon.com
Actual kit contents and packaging method may be adjusted as we improve the product. This instruction will be kept consistent with the current version of the kit.

There might be some tar residue on the wooden pieces from laser cutting. Use a wet soft tissue to clean up the board. The functional pieces are attached to the baseboard by lightly cut tabs. Though you could pop those pieces out by hand, it's highly recommended that you use a knife to cut them free from the back. Use the sanding foam to clean up any burrs on the pieces. Don't sand too much, or it may affect the tightness between joints.

We are switching to a new servo manufacturer from recent batches. Previously, MG92B servos were used for the four shoulder joints and MG90D servos were used for the other joints. The new servos are differentiated by their cable length. Shorter cables are used for the neck, tail, and the four shoulder joints. Longer cables are used for head tilting and the four knee joints.

For hobbyist servos, there are several characteristics in which they can differ. In the Nybble kit, we are using ODMed metal gear, digital PWM, HV servos with bearings and brushed iron core motors. Other generic servos can still work with the OpenCat framework, but may need more trial and error for best performance.
https://docs.petoi.com/chapter2
2020-07-02T19:17:36
CC-MAIN-2020-29
1593655879738.16
[]
docs.petoi.com
Thank you again for purchasing the EduBright theme. We hope that you will be satisfied with our work and that the theme will serve you well. If you find anything that is beyond the scope of this help page, please feel free to send us an email at [email protected]. Support requests are being processed 24/7.

Attention! Please note that we cannot provide technical support until you specify your Item Purchase Code.

Help us to build the perfect product! Please share your ideas and thoughts by sending an email to [email protected]. We will review them and try to add new features to the theme.
https://www.docs.envytheme.com/docs/edubright-documentation/support/
2020-07-02T18:46:32
CC-MAIN-2020-29
1593655879738.16
[]
www.docs.envytheme.com
Crate elma

Library for reading and writing Elasto Mania files. The crate's items include:

- Read and write Elasto Mania level files.
- Read and write Elasto Mania replay files.
- Shared position struct used in both sub-modules.
- General errors.
- Diameter of player head.
- Radius of player head.
- Diameter of objects (and wheels).
- Radius of objects (and wheels).
- Pads a string with null bytes.
- Converts the string-as-i32 times in top10 list to strings.
- Trims trailing bytes after and including null byte.
https://docs.rs/elma/0.1.3/elma/
2017-11-17T21:29:27
CC-MAIN-2017-47
1510934803944.17
[]
docs.rs
Supported operating systems include Windows 10, version 1607 (Anniversary Update), Professional and Enterprise editions, x86 and x64. No specific hardware other than the system requirements of the installed applications is required for Application Profiler.
https://docs.vmware.com/en/VMware-User-Environment-Manager/9.1/com.vmware.user.environment.manager-app-profiler/GUID-E6F7986E-908E-4EA7-9E14-D7BEB1D1DACA.html
2017-11-17T21:57:01
CC-MAIN-2017-47
1510934803944.17
[]
docs.vmware.com
Blueprint information settings control who can access a blueprint, how many machines they can provision with it, and how long to archive a machine after the lease period is over.

Prerequisites

Log in to the vRealize Automation console as a tenant administrator or business group manager. Gather the required information from your fabric administrator. Note: Your fabric administrator might have provided this information in a build profile.

Procedure

- Select the type of blueprint you are creating.
- Enter a name in the Name text box.
- (Optional) Enter a description in the Description text box.
- (Optional) Select the Master check box to allow users to copy your blueprint.
- (Optional) Select the Display location on request check box to prompt users to choose a datacenter location when they submit a machine request. This option requires additional configuration to add datacenter locations and associate compute resources with those locations.
- Specify the number of days to archive machines provisioned from this blueprint in the Archive (days) text box. Enter 0 if you do not want to archive machines.
- (Optional) Set the daily cost of the machine by typing the amount in the Cost (daily) text box. This cost is added to any cost profiles that your fabric administrator sets up.

Results

Your blueprint is not finished. Do not navigate away from this page.
https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.iaas.virtual.doc/GUID-EE9CF4F1-966F-4A9C-A407-AA119B7788E3.html
2017-11-17T21:57:16
CC-MAIN-2017-47
1510934803944.17
[]
docs.vmware.com
The module-level convenience functions for working with FITS files include:

- Factory function to open a FITS file and return an HDUList object.
- Create a new FITS file using the supplied data/header.
- Print the summary information on a FITS file. This includes the name, type, length of header, data shape and type for each extension.
- Append the header/data to a FITS file if filename exists, create it if not. If only data is supplied, a minimal header is created.
- Update the specified extension with the input data/header.
- Get the data from an extension of a FITS file (and optionally the header).
- Get the header from an extension of a FITS file.
- Get a keyword's value from a header in a FITS file.
- Set a keyword's value from a header in a FITS file. If the keyword already exists, its value/comment will be updated. If it does not exist, a new card will be created and it will be placed before or after the specified location. If no before or after is specified, it will be appended at the end. When updating more than one keyword in a file, this convenience function is a much less efficient approach compared with opening the file for update, modifying the header, and closing the file.
- Delete all instances of keyword from a header in a FITS file.
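These descriptions correspond to the convenience functions in astropy.io.fits; a short sketch of the most common calls follows (the file name and keyword are placeholders).

from astropy.io import fits

filename = "example.fits"  # placeholder path

# Open a file and inspect its HDUs.
with fits.open(filename) as hdul:
    hdul.info()              # summary of each extension
    data = hdul[1].data      # data of the first extension
    header = hdul[1].header

# Convenience functions that avoid managing the HDUList yourself.
data = fits.getdata(filename, ext=1)
header = fits.getheader(filename, ext=1)
value = fits.getval(filename, "OBSERVER", ext=0)          # read one keyword
fits.setval(filename, "OBSERVER", value="Edwin", ext=0)   # update or add a keyword

As noted above, repeatedly calling the keyword functions reopens the file each time, so for many edits it is more efficient to open the file once in update mode and modify the header directly.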
https://astropy.readthedocs.io/en/v0.2.5/io/fits/api/files.html
2017-11-17T21:12:32
CC-MAIN-2017-47
1510934803944.17
[]
astropy.readthedocs.io
Running on Platform-as-a-Service

QMachine is easy to deploy using Platform-as-a-Service (PaaS) because its design was driven by the goal to be as far "above the metal" as possible.

One-click deployment to Heroku

In fact, QM is so far above the metal that it can be deployed to Heroku with a single click. If you're reading this in a digital format like HTML or PDF, you can do it without even leaving this page. It's okay if you're leery of clicking things, if your computer's security settings blocked the coolness, or if you're using a hard copy of the manual. The idea here appears again and again in QM: a workflow can be launched simply by loading a URL. In this case, your click sends a message to Heroku to create a new app from a template in a version-controlled repository. This template contains the "blueprint" for a turnkey QM system, complete with an API server, a web server, and a barebones webpage that loads the browser client. It uses the Ruby version of QM for simplicity and the Heroku Button for convenience.
https://docs.qmachine.org/en/latest/paas-sandbox.html
2017-11-17T20:52:23
CC-MAIN-2017-47
1510934803944.17
[]
docs.qmachine.org
Properties provide the means of accessing various types of information regarding a message that passes through the ESB. You can also use properties to control the behavior of the ESB on a given message. Following are the types of properties you can use:

- Generic Properties: Allow you to configure messages as they're processed by the ESB, such as marking a message as out-only (no response message will be expected), adding a custom error message or code to the message, and disabling WS-Addressing headers.
- SOAP Headers: Provide information about the message, such as the To and From values.
- Axis2 Properties: Allow you to configure the web services engine in the ESB, such as specifying how to cache JMS objects, setting the minimum and maximum threads for consuming messages, and forcing outgoing HTTP/S messages to use HTTP 1.0.
- Synapse Message Context Properties: Allow you to get information about the message, such as the date/time it was sent, the message format, and the message operation.

For many properties, you can use the Property mediator to retrieve and set the properties. Additionally, see Accessing Properties with XPath for information on the XPath extension functions and Synapse XPath variables you can use.
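For example, generic properties like the ones described above are typically set with the Property mediator inside a sequence; the snippet below is a sketch (the property names are standard Synapse/ESB properties, but the surrounding sequence and its name are hypothetical).

<sequence name="example_sequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Mark the message as out-only: no response message will be expected -->
    <property name="OUT_ONLY" value="true"/>
    <!-- Disable WS-Addressing headers on the outgoing message -->
    <property name="disableAddressingForOutMessages" value="true" scope="axis2"/>
    <!-- Attach a custom error message to the message context -->
    <property name="ERROR_MESSAGE" value="Backend service unavailable"/>
    <send/>
</sequence>

The scope attribute controls which property table the value is written to (Synapse by default, or axis2, transport, and so on), which is why the WS-Addressing setting above is written to the axis2 scope.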
https://docs.wso2.com/display/ESB500/Properties+Reference
2017-11-17T21:11:41
CC-MAIN-2017-47
1510934803944.17
[array(['http://b.content.wso2.com/sites/all/images/zusybot-hide-img.png', None], dtype=object) ]
docs.wso2.com
Transform Manager is a tool located within Maltego to help with the addition of transform application servers (TAS) as well as the configuration of transforms from those servers and sets (groupings of transforms). Clicking the Manage Transforms button will open the Transform Manager Window which is split between three tabs. Namely, All Transforms, Transform Servers and Transform Sets. All Transforms Transforms can be edited from the default Transform Manager window (see above). From this window, you can sort transforms by: - Transform: The name of the transform. - Status: Whether the transform is ‘ready’ or has requirements such as a disclaimer or input that needs to be set. - Location: The Transform Application Servers (TAS) that this transform is found on. - Default Set: The default set this transform can be found in. - Input: The input entity type (what you click on to run this transform). - Output: The output entity type(s) (What is returned after running this transform). This window can also be searched via the control at the top right which will search the transform names column: With the default layout of the Transform Manager the following sections are also available: - Transform Information (bottom left): This section describes the transform, gives additional transform information such as transform author and informs of any user action needed, such as accepting disclaimers or if additional settings are needed. - Transform Settings (bottom right): This section allows the modification of transform specific settings such as API keys, timeouts, setting fields to popup and so on. Transform Servers The Transform Servers tab displays the servers that are available to you which you can easily turn on and off to set if they are used. This is useful when you have multiple servers and would prefer not to specify every time you run a transform which server it should be run on. You can also view transforms on specific servers by expanding each server with the (+) icon, as seen below: Transform Sets Sets are a way of grouping transforms that are commonly run together. With the default installation of Maltego you will notice various sets have been preconfigured for you, such as the Resolve to IP set which groups the transforms that convert DNSName, MX Record, NS Record and Website Entities to IP addresses. This has been done so that instead of having to select each individual entity type you can run a set of transforms on them. Creating a Set To create a new set simply select the New Set... button within the Set Manager and fill in the Set Name and a Description for the set (optional). Editing Sets To add or remove transforms from a set, start by selecting the set you wish to modify from the list of available sets within the right-hand pane and then drag the transform from the left-hand pane over it. To add more than one transform to the set simply select multiple transforms by using either the shift or Ctrl modifiers and then drag the selection onto the set. Alternatively, you can simply select the transforms you wish to add, right-click on them and use the Add to Set-> context menu and select the set you wish to use. To remove specific transforms to a set, select the transforms that you wish to remove within the selected set, right-click and select Remove from set. Deleting a Set To permanently delete a set, select the set from the right-hand pane, right-click on it and click Delete.... 
You will then be given a dialog to confirm that you wish to delete the set: Selecting OK on this dialog will delete the set permanently.
https://docs.maltego.com/support/solutions/articles/15000010779-manage-transforms
2019-01-16T06:41:04
CC-MAIN-2019-04
1547583656897.10
[array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510507/original/Iww1ssZtMje7jY2vxXYtcK-lTjVicrPmVg.png?1528984083', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510540/original/OOqFRw4uS7hS83M1BeX9FOBhn2EYppcLfA.png?1528984123', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510594/original/v8wMUJuFda73doPIPzgTTg2EXu0Nl-g7yQ.png?1528984235', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510713/original/LzwXX33PpRClxM0MKBI6Us3T_bwQ1l3z5w.png?1528984405', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510739/original/K2afozo4-Gsqjwi30SP7wDUxPvuPZNghqA.png?1528984443', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510758/original/Iii5binH_LyFU-B3WsMZJ5VysZ57zQn6pw.png?1528984474', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510794/original/AbB9dP5VDvnWruUZJxJGt1uwXyhhB29Fwg.png?1528984526', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510849/original/ReRtBO-wSN7VGqvVD7ydSynnij0F7_n7Dw.png?1528984615', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004510897/original/yhGWYZAqvR0pyKcbTei6HzVeFxlTgAS_2A.png?1528984706', None], dtype=object) ]
docs.maltego.com
Run Method (Windows Script Host)

Runs a program in a new process.

Related reference: Exec Method (Windows Script Host)
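A minimal VBScript sketch of the Run method might look like this; the command and window-style value are only illustrative.

' Launch Notepad, show its window normally (1), and wait for it to exit (True).
Dim shell, exitCode
Set shell = CreateObject("WScript.Shell")

exitCode = shell.Run("notepad.exe", 1, True)

WScript.Echo "The process exited with code " & exitCode

When the third argument is False (the default), Run returns immediately and the return value is 0 rather than the launched program's exit code.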
https://docs.microsoft.com/en-us/previous-versions/d5fk67ky(v=vs.85)
2019-01-16T06:30:31
CC-MAIN-2019-04
1547583656897.10
[]
docs.microsoft.com
Transform Meta Info

Description

This transform queries a public PGP key server and asks the question "show me all the email addresses that end in the supplied domain name"; results are returned as email address entities. Keep in mind that this information might be outdated.

The transform is useful for finding email addresses at a domain; an added bonus is that we know these people communicate in encrypted form with others. The transform will query the following PGP key servers:
https://docs.maltego.com/support/solutions/articles/15000019132-to-email-addresses-pgp-
2019-01-16T06:26:35
CC-MAIN-2019-04
1547583656897.10
[]
docs.maltego.com
Themes¶ The Themes tab allows you to customize interface appearance and colors. The colors for each editor can be set separately by simply selecting the editor you wish to change in the multi-choice list at the left, and adjusting colors as required. Notice that changes appear in real-time on your screen. In addition, details such as the dot size in the 3D View or the Graph Editor can also be changed. Themes use Blender’s preset system to save a theme. This will save the theme to an XML file in the ./scripts/presets/interface_theme/ subdirectory of one of the configuration directories. Blender comes bundled with a small selection of themes. This is an example of the theme Elsyiun.
https://docs.blender.org/manual/en/latest/preferences/themes.html
2019-01-16T05:37:03
CC-MAIN-2019-04
1547583656897.10
[array(['../_images/preferences_themes_tab.png', '../_images/preferences_themes_tab.png'], dtype=object) array(['../_images/preferences_themes_example.png', '../_images/preferences_themes_example.png'], dtype=object)]
docs.blender.org
This page provides information on the Image Filter rollout in the V-Ray tab of the Render Settings.

Page Contents

Overview

UI Path: ||Render Setup window|| > V-Ray tab > Image filter rollout (When V-Ray Adv is the Production renderer) ||Render Setup window|| > V-Ray RT tab > Image filter rollout (When V-Ray RT is the Production renderer)

Parameters

The Bucket image sampler was used for the images below.

Example: Anti-aliasing Filters and Moire Effects

This example demonstrates the effect that anti-aliasing filters have on moire effects. The comparison images show the following filters and settings: Image Filter off; Area, size = 1.5; Area, size = 4.0; Quadratic; Sharp Quadratic; Cubic; Video; Soften, size = 6.0; Cook Variable, size = 2.5; Blend, size = 8.0, blend = 0.3; Blackman; Mitchell-Netravali, blur = 0.333, ringing = 0.333; Catmull-Rom.
https://docs.chaosgroup.com/display/VRAY3MAX/Image+Filter
2019-01-16T06:45:14
CC-MAIN-2019-04
1547583656897.10
[]
docs.chaosgroup.com
Costs associated with using data on an AWS volume

Posted in General by Ian Fore Thu Sep 13 2018 13:34:53 GMT+0000 (UTC)·4·Viewed 132 times

I've set up a Volume to access files from a bucket I have under my own AWS account and copied a file into a project. Does this copy incur storage charges of its own?

It doesn't incur storage charges of its own. You are importing files from the Volume into the project and there is no actual data transfer, so you are not charged additionally.

Perfect. Thanks for the prompt answer. Out of curiosity, do / how do regions for storage and compute come into play here? I assume that if a user's data were in a bucket in another region than the compute is occurring - or data from multiple regions are imported into a project - there would be data transfer charges assessed when an analysis is done?

If your bucket is in the us-east-1 region, there won't be any additional data egress charges for computation either; otherwise, there will be additional charges for file transfer.
https://docs.cancergenomicscloud.org/discuss/5b9a677d56c988000346edf3
2019-01-16T05:39:36
CC-MAIN-2019-04
1547583656897.10
[]
docs.cancergenomicscloud.org
Build the package with the rpmbuild command. Run the rpmbuild command as a non-root, normal, user. If you build a package as the root user, possible mistakes in the spec file, for example in the %install section, can cause damage to your system.

Place the source tarball in the ~/rpmbuild/SOURCES/ directory and the spec file in the ~/rpmbuild/SPECS/ directory. Then change to the ~/rpmbuild/SPECS/ directory and run the rpmbuild command:

cd ~/rpmbuild/SPECS
rpmbuild -ba eject.spec

If the build fails because of missing build dependencies, for example:

error: Failed build dependencies: libtool is needed by eject-2.1.5-0.1.x86_64

install the missing packages and run the build again:

yum install -y libtool

If you are iterating on the %install section, you may want to skip earlier stages of the build process with the --short-circuit option and restart the build process at the %install stage:

rpmbuild -bi --short-circuit eject.spec

If the build succeeds, the last line of the rpmbuild output will be as follows:

+ exit 0

After a successful rpmbuild -ba eject.spec command, the binary package will be placed in a subdirectory of the ~/rpmbuild/RPMS/ directory and the source package will be placed in ~/rpmbuild/SRPMS/.

To build only the source package (.src.rpm), run the following command:

rpmbuild -bs eject.spec

This places the source package in the ~/rpmbuild/SRPMS/ directory, or recreates it if it has been previously created.
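After a build it is common to sanity-check the results before distributing them; the commands below are a generic sketch, and the exact package file names will differ on your system.

# List the files contained in the freshly built binary package
rpm -qlp ~/rpmbuild/RPMS/x86_64/eject-2.1.5-0.1.x86_64.rpm

# Show the package metadata (summary, license, description)
rpm -qip ~/rpmbuild/RPMS/x86_64/eject-2.1.5-0.1.x86_64.rpm

# Run rpmlint on the spec file and the built packages to catch common packaging errors
rpmlint ~/rpmbuild/SPECS/eject.spec ~/rpmbuild/RPMS/x86_64/eject-*.rpm ~/rpmbuild/SRPMS/eject-*.src.rpm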
https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Packagers_Guide/sect-Packagers_Guide-Building_a_Package.html
2019-01-16T06:35:03
CC-MAIN-2019-04
1547583656897.10
[]
docs.fedoraproject.org
When querying for the first time, define the Active Directory/IP address scope, which includes Active Directory objects and IP addresses that the OfficeScan server will query on demand or periodically. After defining the scope, start the query process. To define an Active Directory scope, OfficeScan must first be integrated with Active Directory. For details about the integration, see Active Directory Integration. A new screen opens. For a pure IPv4 OfficeScan server, type an IPv4 address range. For a pure IPv6 OfficeScan server, type an IPv6 prefix and length. For a dual-stack OfficeScan server, type an IPv4 address range and/or IPv6 prefix and length. The IPv6 address range limit is 16 bits, which is similar to the limit for IPv4 address ranges. The prefix length should therefore be between 112 and 128. To view the communication port used by the OfficeScan server, go to Agents > Agent Management and select a domain. The port displays next to the IP address column. Trend Micro recommends keeping a record of port numbers for your reference. Enabling this setting speeds up the query. When connection to endpoints cannot be established, the OfficeScan server no longer needs to perform all the other connection verification tasks before treating endpoints as unreachable. The Outside Server Management screen displays the result of the query. The query may take a long time to complete, especially if the query scope is broad. Do not perform another query until the Outside Server Management screen displays the result. Otherwise, the current query session terminates and the query process restarts.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/managing-the-trend_c/security-compliance/security-compliance-1/defining-the-active-.aspx
2019-01-16T06:24:09
CC-MAIN-2019-04
1547583656897.10
[]
docs.trendmicro.com
Working with client test runners

If an automated test includes steps that involve a form or any other user-interface (UI) element, it runs those steps in a browser tab or window called a test runner or client test runner. The Automated Test Framework supports two types of client test runners: Client Test Runners for manually started tests and Scheduled Client Test Runners for tests started by a schedule.

When test execution is enabled, clicking the Client Test Runner module opens the client test runner in the current browser session. If tests are waiting to be run, the Client Test Runner runs a waiting test. If no tests are waiting, the Client Test Runner remains open, waiting for a test to run later. If a test remains waiting for more than ten minutes, the system cancels the test.

Test execution property

To work with the client test runner module, the test execution property must be enabled. Note: The test execution property is disabled by default.

The client test runner takes screenshots as the tests run. For best results with screenshots, leave the browser zoom level set to 100%.

Browser recommendations for scheduled suites

The client test runners for scheduled suites have additional browser requirements. On OS X with the client test runner on Chrome or Safari: If the screen is locked or the client test runner tab is occluded when the system attempts to run the test suite, tests run significantly slower and may time out. For best performance, run client test runners for scheduled suites in a VM environment in which the screen does not become locked or disabled.

The browser must meet the criteria you specified on the Scheduled Suite Run record. A client test runner meeting the criteria you specified on the Scheduled Suite Run record must be available to run the test suite at the scheduled time. The system cannot automatically open a client test-runner session.

Javascript window command intercepts

The Client Test Runner captures window object commands including console.log, console.error, alert, confirm, and prompt, with default responses where necessary. Any script that calls window.confirm receives a boolean response of true. Any script that calls window.prompt receives the string response "test value".

Active test runner tables

When you start a client test runner, the system registers that runner in the Active Test Runners table. You can view this table in the Active Manual Test Runners module and the Active Scheduled Test Runners module. These two modules provide views of the same table, filtered to show only manual or only scheduled test runners.

Related references: Automated Test Framework Client Test Runner module; Automated Test Framework Scheduled Client Test Runner module
https://docs.servicenow.com/bundle/jakarta-application-development/page/administer/auto-test-framework/concept/atf-test-runners.html
2018-12-10T04:57:06
CC-MAIN-2018-51
1544376823303.28
[]
docs.servicenow.com
An Act to amend 20.370 (5) (cx), 23.33 (5m) (title), (a) and (b) (intro.), 23.33 (5m) (b) 2. to 6., 23.33 (5m) (c) (intro.) and 1., 23.33 (5m) (c) 3. to 7. and 23.33 (5m) (d); and to create 23.33 (5m) (e) of the statutes; Relating to: funding for the all-terrain vehicle and utility terrain vehicle safety enhancement program and making an appropriation. (FE)
http://docs.legis.wisconsin.gov/2017/proposals/sb124
2018-12-10T04:13:39
CC-MAIN-2018-51
1544376823303.28
[]
docs.legis.wisconsin.gov
These are the return codes returned at the end of execution of a CLI command:

0 -- Command was successful. There were no errors thrown by either the CLI or by the service the request was made to.

1 -- Limited to s3 commands, at least one or more s3 transfers failed for the command executed.

2 -- The meaning of this return code depends on the command being run. The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands. The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special devices, FIFOs, or sockets, and files that the user cannot read from.

130 -- The process received a SIGINT (Ctrl-C).

255 -- Command failed. There were errors thrown by either the CLI or by the service the request was made to.

To determine the return code of a command, run the following right after running a CLI command. Note that this will work only on POSIX systems:

$ echo $?

Output (if successful): 0

On Windows PowerShell, the return code can be determined by running:

> echo $lastexitcode

Output (if successful): 0

On Windows Command Prompt, the return code can be determined by running:

> echo %errorlevel%

Output (if successful): 0
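In scripts, these return codes are typically what you branch on; the following is a small illustrative shell sketch (the bucket name and paths are placeholders).

#!/bin/bash
aws s3 sync ./local-dir s3://example-bucket/backup/
rc=$?

if [ "$rc" -eq 0 ]; then
    echo "Sync completed successfully."
elif [ "$rc" -eq 1 ] || [ "$rc" -eq 2 ]; then
    echo "Sync finished, but one or more files failed or were skipped (code $rc)." >&2
else
    echo "Command failed outright with code $rc." >&2
    exit "$rc"
fi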
https://docs.aws.amazon.com/cli/latest/topic/return-codes.html
2018-12-10T04:35:05
CC-MAIN-2018-51
1544376823303.28
[]
docs.aws.amazon.com
Creating custom dropdowns

In eZ Platform you are able to implement custom dropdowns anywhere in the Back Office. Follow the steps below to learn how to integrate this small component to fit your project's needs.

Prepare custom dropdown structure

First, prepare the HTML code structure in the following way:

Line 2, highlighted in the code above, contains a hidden native select input. It stores the selection values. The input is hidden because the custom dropdown duplicates its functionality.

Caution: Do not remove the select input. Removal would break the functionality of any submission form.

Generate <select> input

The next step is generating a standard select input with the ez-custom-dropdown__select CSS class added to the <select> element. This element should contain at least one additional attribute: hidden. If you want to allow users to pick multiple items from a list, add the multiple attribute to the same element.

Add attributes

Next, look at the data-value attribute in the code above (lines 11 and 12), which is added to the duplicated options with the CSS class ez-custom-dropdown__item. It stores the value of an option from the select input. You can provide placeholder text for your custom dropdown. To do so:

- put a data-value attribute with no value: data-value=""
- add a disabled attribute to the item in the duplicated list of options, as shown in the example below. It will make it unclickable.

Initialize

To initialize the custom dropdown, run the following JavaScript code:

Configuration options

Full list of options:

In the above code samples you will find 4 of 5 configuration options. The default template HTML code structure for the missing selectedItemTemplate looks like this:
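As an illustration of the structure described at the start of this section, a minimal markup sketch might look as follows; only the class names and attributes mentioned in this guide are taken from it, while the wrapper element and list markup are assumptions.

<div class="ez-custom-dropdown">
    <!-- Hidden native select: it stores the selected values, so do not remove it -->
    <select class="ez-custom-dropdown__select" hidden multiple>
        <option value="option-1">Option 1</option>
        <option value="option-2">Option 2</option>
    </select>

    <!-- Duplicated options used for display; data-value mirrors the option values -->
    <ul>
        <li class="ez-custom-dropdown__item" data-value="" disabled>Select an option</li>
        <li class="ez-custom-dropdown__item" data-value="option-1">Option 1</li>
        <li class="ez-custom-dropdown__item" data-value="option-2">Option 2</li>
    </ul>
</div>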
https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/cookbook/creating_custom_dropdowns/
2018-12-10T03:53:10
CC-MAIN-2018-51
1544376823303.28
[array(['../img/dropdown_expanded_state.jpg', 'Dropdown expanded state'], dtype=object) array(['../img/dropdown_multiple_selection.jpg', 'Dropdown multiple selection'], dtype=object)]
ez-systems-developer-documentation.readthedocs-hosted.com
QtLocation.qtlocation-mapviewer-example

This is a large example covering many basic uses of maps, positioning, and navigation services in Qt Location. This page is divided into sections covering each of these areas of functionality with snippets from the code.

The Map Viewer

The Nokia services plugin supplied with Qt requires an app_id and token pair. See "Qt Location Nokia Plugin" for details.

QML types shown in this example: - Displaying a map - Map - MapGestureArea - coordinate - Finding an address - Directions and travel routes

Displaying a Map

Drawing a map on-screen is accomplished using the Map type, as shown below.

Map { id: map zoomLevel: (maximumZoomLevel - minimumZoomLevel)/2 center { latitude: -27.5796 longitude: 153.1003 } // Enable pinch gestures to zoom in and out gesture.flickDeceleration: 3000 gesture.enabled: true }

In this example, we give the map an initial center coordinate with a set latitude and longitude. We also set the initial zoom level to 50% (halfway between the maximum and minimum). The calls to "pinch" and "flick" are used to enable gestures on the map. The flick gesture is also sometimes known as "kinetic panning", and provides a more intuitive feel for panning the map both on touch screens and with a mouse.

As we do not specify a plugin for supplying map data, the platform default will be used. This is typically the "nokia" plugin, which provides data from Nokia services. Additional licensing conditions do apply to the use of this data; please see the documentation for further details.

Finding an Address

To search for an address or other location, we use a GeocodeModel, which is typically instantiated as a property of the Map component:

property GeocodeModel geocodeModel: GeocodeModel { }

Then, we add a MapItemView to the Map and give it a delegate containing a MapCircle to draw each result:

Component { id: pointDelegate MapCircle { radius: 1000 color: circleMouseArea.containsMouse ? "lime" : "red" opacity: 0.6 center: locationData.coordinate } }

With these three objects, we have enough to receive Geocode responses and display them on our Map. The final piece is to send the actual Geocode request. In this example, we have a utility component called Dialog which we use to display the user interface requesting geocoding parameters. You can create a similar component yourself using Dialog.qml in this example as a reference, or drive the process using any other UI you wish.

To send a geocode request, first we create an Address object, and fill it in with the desired parameters. Then we set "map.geocodeModel.query" to the filled in Address, and call update() on the GeocodeModel.

InputDialog { id: geocodeDialog Address { id: geocodeAddress } onGoButtonClicked: { // manage the UI state transitions page.state = "" messageDialog.state = "" // fill out the Address element geocodeAddress.street = dialogModel.get(0).inputText geocodeAddress.city = dialogModel.get(1).inputText geocodeAddress.state = dialogModel.get(2).inputText geocodeAddress.country = dialogModel.get(3).inputText geocodeAddress.postalCode = dialogModel.get(4).inputText // send the geocode request map.geocodeModel.query = geocodeAddress map.geocodeModel.update() } }

Directions and Travel Routes

To calculate routes, we use a RouteQuery and a RouteModel, which are again instantiated as properties of the Map:

property RouteQuery routeQuery: RouteQuery {} property RouteModel routeModel: RouteModel { plugin : map.plugin query: routeQuery }

To display the contents of a model to the user, we need a view.
Once again we will use a MapItemView, to display the Routes as objects on the Map:

MapItemView { model: routeModel delegate: routeDelegate autoFitViewport: true }

To act as a template for the objects we wish the view to create, we create a delegate component:

Component { id: routeDelegate MapRoute { route: routeData line.color: routeMouseArea.containsMouse ? "lime" : "red" } }

To request a route, we need a start coordinate and an end coordinate, which we store inside the RouteDialog component:

RouteDialog { id: routeDialog property variant startCoordinate property variant endCoordinate }

In the next snippet, we show how to set up the request object and instruct the model to update. We also instruct the map to center on the start coordinate for our routing request.

function calculateRoute() { // clear away any old data in the query map.routeQuery.clearWaypoints(); // add the start and end coords as waypoints on the route map.routeQuery.addWaypoint(startCoordinate) map.routeQuery.addWaypoint(endCoordinate) map.routeQuery.travelModes = routeDialog.travelMode map.routeQuery.routeOptimizations = routeDialog.routeOptimization map.routeModel.update(); // center the map on the start coordinate map.center = startCoordinate }

The turn-by-turn directions for the calculated route are shown in the pull-out on the left-hand side of the map. To create this pull-out's contents, we use a standard ListModel and ListView pair. The data in the ListModel is built from the routeModel's output:

ListModel { id: routeInfoModel property string travelTime property string distance function update() { clear() if (routeModel.count > 0) { for (var i = 0; i < routeModel.get(0).segments.length; i++) { append({ "instruction": routeModel.get(0).segments[i].maneuver.instructionText, "distance": formatDistance(routeModel.get(0).segments[i].maneuver.distanceToNextInstruction) }); } } travelTime = routeModel.count == 0 ? "" : formatTime(routeModel.get(0).travelTime) distance = routeModel.count == 0 ? "" : formatDistance(routeModel.get(0).distance) } }

Inside the RouteModel, we add an onStatusChanged handler, which calls the update() function we defined on the model:

onStatusChanged: { if (status == RouteModel.Ready) { switch (count) { case 0: clearAll() // technically not an error map.routeError() break case 1: routeInfoModel.update() break } } else if (status == RouteModel.Error) { clearAll() map.routeError() } }

Files:

- mapviewer/mapviewer.qml
- mapviewer/qmlmapviewerwrapper.cpp
- mapviewer/content/dialogs/Message.qml
- mapviewer/content/dialogs/RouteDialog.qml
- mapviewer/content/map/3dItem.qml
- mapviewer/content/map/CircleItem.qml
- mapviewer/content/map/ImageItem.qml
- mapviewer/content/map/MapComponent.qml
- mapviewer/content/map/Marker.qml
- mapviewer/content/map/MiniMap.qml
- mapviewer/content/map/PolygonItem.qml
- mapviewer/content/map/PolylineItem.qml
- mapviewer/content/map/RectangleItem.qml
- mapviewer/content/map/VideoItem.qml
- mapviewer/mapviewer.pro
- mapviewer/mapviewerwrapper.qrc
https://docs.ubuntu.com/phone/en/apps/api-qml-current/QtLocation.qtlocation-mapviewer-example
2018-12-10T05:02:35
CC-MAIN-2018-51
1544376823303.28
[]
docs.ubuntu.com
$ oc <action> <object_type> <object_name>

The developer CLI allows interaction with the various objects that are managed by OpenShift Container Platform. Many common oc operations are invoked using the following syntax:

$ oc <action> <object_type> <object_name>

This specifies:

- An <action> to perform, such as get or describe.
- The <object_type> to perform the action on, such as service.
- The <object_name> of the object.

Get a list of objects of the specified type, or details of a specific object:

$ oc get <object_type> [<object_name>]

Returns information about the specific object returned by the query. A specific <object_name> must be provided. The actual information that is available varies as described in object type.

$ oc describe <object_type> <object_name>

Edit the desired object type:

$ oc edit <object_type>/<object_name>

Edit the desired object type with a specified text editor:

$ OC_EDITOR="<text_editor>" oc edit <object_type>/<object_name>

Edit the desired object in a specified format (e.g. JSON):

$ oc edit <object_type>/<object_name> \ --output-version=<object_type_version> \ -o <object_type_format>

Look up a service and expose it as a route. There is also the ability to expose a deployment configuration, replication controller, service, or pod as a new service on a specified port. If no labels are specified, the new object will re-use the labels from the object it exposes.

If you are exposing a service, the default generator is --generator=route/v1. For all other cases the default is --generator=service/v2, which leaves the port unnamed. Generally, there is no need to set a generator with the oc expose command. A third generator, --generator=service/v1, is available with the port name default.

$ oc expose <object_type> <object_name>

Delete the specified object. An object configuration can also be passed in through STDIN. The oc delete all -l <label> operation deletes all objects matching the specified <label>, including the replication controller so that pods are not re-created.

$ oc delete -f <file_path>
$ oc delete <object_type> <object_name>
$ oc delete <object_type> -l <label>
$ oc delete all -l <label>

The CLI also provides access to inspect and manipulate deployment configurations using standard oc resource operations, such as get, create, and describe.

Manually start the build process with the specified build configuration file:

$ oc start-build <buildconfig_name>

Manually start the build process by specifying the name of a previous build as a starting point:

$ oc start-build --from-build=<build_name>

Manually start the build process by specifying either a configuration file or the name of a previous build and retrieve its build logs:

$ oc start-build --from-build=<build_name> --follow
$ oc start-build <buildconfig_name> --follow

Wait for a build to complete and exit with a non-zero return code if the build fails:

$ oc start-build --from-build=<build_name> --wait

Set or override environment variables for the current build without changing the build configuration. Alternatively, use -e.
$ oc start-build --env <var_name>=<value> Set or override the default build log level output during the build: $ oc start-build --build-loglevel [0-5] Specify the source code commit identifier the build should use; requires a build based on a Git repository: $ oc start-build --commit=<hash> Re-run build with name <build_name>: $ oc start-build --from-build=<build_name> Archive <dir_name> and build with it as the binary input: $ oc start-build --from-dir=<dir_name> Use the specified file as the binary input for the build: $ oc start-build --from-file=<file_path> The path to a local source code repository to use as the binary input for a build: $ oc start-build --from-repo=<path_to_repo> Specify a webhook URL for an existing build configuration to trigger: $ oc start-build --from-webhook=<webhook_URL> The contents of the post-receive hook to trigger a build: $ oc start-build --git-post-receive=<contents> The path to the Git repository for post-receive; defaults to the current directory: $ oc start-build --git-repository=<path_to_repo> List the webhooks for the specified build configuration or build; accepts all, generic, or github: $ oc start-build --list-webhooks Create a build configuration based on the source code in the current Git repository (with a public remote) and a container image: $ oc new-build . Stop a build that is in progress: $ oc cancel-build <build_name> Cancel multiple builds at the same time: $ oc cancel-build <build1_name> <build2_name> <build3_name> Cancel all builds created from the build configuration: $ oc cancel-build bc/<buildconfig_name> Specify the builds to be canceled: $ oc cancel-build bc/<buildconfig_name> --state=<state> Example values for state are new or pending. Import tag and image information from an external image repository: $ oc import-image <image_stream> Set the number of desired replicas for a replication controller or a deployment configuration to the number of specified replicas: $ oc scale <object_type> <object_name> --replicas=<#_of_replicas> Parse a configuration file and create one or more OpenShift Container Platform objects based on the file contents. The -f flag can be passed multiple times with different file or directory paths. When the flag is passed multiple times, oc create iterates through each one, creating the objects described in all of the indicated files. Any existing resources are ignored. $ oc create -f <file_or_dir_path> Attempt to modify an existing object based on the contents of the specified configuration file. The -f flag can be passed multiple times with different file or directory paths. When the flag is passed multiple times, oc replace iterates through each one, updating the objects described in all of the indicated files. $ oc replace -f <file_or_dir_path> Process a template into a list of resources: $ oc process -f <template_file_path> Create and run a particular image, possibly replicated. By default, create a deployment configuration to manage the created container(s). You can choose to create a different resource using the --generator flag. You can also choose to run in the foreground for an interactive container execution. $ oc run NAME --image=<image> \ [--generator=<resource>] \ [--port=<port>] \ [--replicas=<replicas>] \ [--dry-run=<bool>] \ [--overrides=<inline_json>] \ [options] Update one or more fields of an object using a strategic merge patch: $ oc patch <object_type> <object_name> -p <changes> The <changes> value is a JSON or YAML expression containing the new fields and values.
For example, to update the spec.unschedulable field of the node node1 to the value true, the JSON expression is: $ oc patch node node1 -p '{"spec":{"unschedulable":true}}' Retrieve the log output for a specific build, deployment, or pod. This command works for builds, build configurations, deployment configurations, and pods. $ oc logs -f <pod> Execute a command in an already-running container. You can optionally specify a container ID; otherwise, it defaults to the first container. $ oc exec <pod> [-c <container>] <command> Copy the contents to or from a directory in an already-running pod container. If you do not specify a container, it defaults to the first container in the pod. To copy contents from a local directory to a directory in a pod: $ oc rsync <local_dir> <pod>:<pod_dir> -c <container> To copy contents from a directory in a pod to a local directory: $ oc rsync <pod>:<pod_dir> <local_dir> -c <container> Forward one or more local ports to a pod: $ oc port-forward <pod> <local_port>:<remote_port>
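Putting a few of these commands together, a typical debugging session might look like the following sketch; the resource names (myapp, the pod name) and the /health path are placeholders, not values from this reference:
# rebuild the image and stream the build log
$ oc start-build myapp --follow
# tail the application log from the latest deployment
$ oc logs -f dc/myapp
# forward local port 8080 to the pod and check the service locally
$ oc port-forward myapp-1-abcde 8080:8080 &
$ curl http://localhost:8080/health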
https://docs.openshift.com/container-platform/3.6/cli_reference/basic_cli_operations.html
2018-12-10T03:49:44
CC-MAIN-2018-51
1544376823303.28
[]
docs.openshift.com
Using Infinity Connect in-call controls The table below shows the actions that can be performed while a call is in progress. Note that this table includes all features available to the Infinity Connect desktop client, the Infinity Connect web app and the Infinity Connect mobile clients for Android and iOS, although not all features are available to all clients.
https://docs.pexip.com/end_user/guide_for_admins/connect_controls_generic.htm
2018-12-10T04:37:46
CC-MAIN-2018-51
1544376823303.28
[]
docs.pexip.com
<generatePublisherEvidence> Element Specifies whether the runtime creates Publisher evidence for code access security (CAS). <configuration> Element <runtime> Element <generatePublisherEvidence> Element <generatePublisherEvidence enabled="true|false"/> Attributes and Elements The following sections describe attributes, child elements, and parent elements. Attributes enabled - Specifies whether the runtime creates Publisher evidence. Child Elements None. Parent Elements <configuration>, <runtime>. Remarks See Also Reference Other Resources Configuration File Schema for the .NET Framework
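For illustration, a minimal application configuration file that turns off Publisher evidence generation (assuming the application does not rely on Publisher evidence) might look like this:
<configuration>
  <runtime>
    <!-- Skip Authenticode signature verification at assembly load time. -->
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>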
https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-3.5/bb629393(v=vs.90)
2018-12-10T04:31:19
CC-MAIN-2018-51
1544376823303.28
[]
docs.microsoft.com
WebRTC Media Service Note: Not all changes listed below may pertain to your deployment. April 11, 2019 (9.0.000.37) What's New SIP addresses - WebRTC Media Service now retrieves the SIP address from Genesys Web Services (GWS) version 9 automatically and users are not required to configure the SIP address while provisioning Agent Desktop. The Agent Desktop supported version is 9.0.000.21 and above. December 21, 2018 (9.0.000.27) What's New - WebRTC Media Service now supports the OAuth 2.0 authentication and authorization method to validate the user credentials passed from Agent Desktop. The Genesys Softphone compatible version to support OAuth 2.0 is 9.0.004.05 and above and the Agent Desktop version is 9.0.000.17 and above. June 29, 2018 (9.0.000.15) What's New Initial release This is the initial release of WebRTC Media Service on the PureEngage Cloud (PEC) platform. Agents can handle both inbound and outbound voice calls through WebRTC-capable devices like Genesys Softphone by communicating with the PEC platform through the WebRTC Media Service. The WebRTC Media Service supports Genesys Softphone version 9.0.003.04+. The key features of the WebRTC Media Service are: - Supports G.711 and Opus codecs. - Provides real-time media transcoding whenever required. - Supports audio calls only. - The signalling and media encryption capabilities of the WebRTC Media Service ensure appropriate security for voice communications over the public network. Known Issues There are currently no known issues.
https://docs.genesys.com/Documentation/PSAAS/latest/RN/WebRTC
2019-05-19T17:07:50
CC-MAIN-2019-22
1558232255071.27
[]
docs.genesys.com
A snippet for displaying the formed order. It is used on the ordering page and for mailing notifications to customers. Parameters Other general pdoTools parameters may also be used. Formatting The snippet works with a Fenom chunk and passes 7 variables to it: - order - order data array from the msOrder object - products - array of ordered goods with all their properties - user - data array of the modUser and modUserProfile objects with all of the customer's details - address - data array of the msAddress object with delivery data - delivery - array of properties of the selected msDelivery object - payment - array of properties of the selected msPayment object - total - array of order totals: - cost - total order cost - weight - total order weight - delivery_cost - delivery cost on its own - cart_cost - cost of the ordered goods on their own - cart_weight - total weight of the ordered goods - cart_count - number of ordered goods Data passed when calling the snippet may also be present. For example, the variable payment_link may be available in the chunk that formats the new-order letter. Placeholders All available order placeholders can be seen by displaying an empty chunk: <pre>[[!msGetOrder?tpl=``]]</pre> Order creation It is recommended to call this snippet together with the others on the ordering page: [[!msCart]] <!-- Cart view and change, hidden after order creation --> [[!msOrder]] <!-- Ordering form, hidden after order creation --> [[!msGetOrder]] <!-- Order information display, shown after its creation --> Writing letters This snippet is also used by the miniShop2 class for writing mail notifications to customers, if you enable such sending in the status settings. By default, all the letters extend the single base mail template tpl.msEmail and override its blocks: - logo - shop logo with a link to the home page - title - letter title - products - table of ordered goods - footer - site link in the letter footer For example, the letter for a new customer order is: {extends 'tpl.msEmail'} {block 'title'} {'ms2_email_subject_new_user' | lexicon : $order} {/block} {block 'products'} {parent} {if $payment_link?} <p style="margin-left:20px;{$style.p}"> {'ms2_payment_link' | lexicon : ['link' => $payment_link]} </p> {/if} {/block} As you can see, the main template is inherited, the title is changed, and a payment link is added to the table of goods (if any). You can find more details about template inheritance in the Fenom documentation.
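As an illustration of how the passed variables might be used in a custom chunk, here is a minimal Fenom sketch; the product field names (name, count, cost) are assumptions and should be checked against the placeholders printed by the empty chunk above:
{foreach $products as $product}
    {$product.name} x {$product.count} = {$product.cost}
{/foreach}
Total: {$total.cost} ({$total.cart_count} items, delivery {$total.delivery_cost})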
https://docs.modx.pro/en/components/minishop2/snippets/msgetorder
2019-05-19T17:51:27
CC-MAIN-2019-22
1558232255071.27
[]
docs.modx.pro
Utility Links are located on the Ribbon Bar at the very top of the site. There are many standard SharePoint editing features available here. The following links are available: Share A collaboration tool that allows you to share the site with others. Sync This feature synchronizes your files to your computer or devices. See Sync Document Libraries for more information. Edit The Edit Icon (the pencil) opens the page for editing. All the apps are visible, and the empty apps that can be used to add content are there. To find out more about adding and removing apps, see Apps. Sign In In the top right corner, if the words "sign in" are present, you need to sign in to access the site and its editing features. Sign in with your regular school district ID and password. Once you are signed in, the other items that appear are listed: - About Me - takes you to your profile, which can then be edited - Sign in as Different User - is useful when testing and editing the site - signing in as a student if you have that capability in order to see how Assignments work, for example - Sign Out - how you sign out when leaving the session Settings The Settings Menu provides access to important options and settings. For more information, see Settings Menu.
https://docs.scholantis.com/display/PUG2013/Utility+Links
2019-05-19T17:37:20
CC-MAIN-2019-22
1558232255071.27
[]
docs.scholantis.com
Set up Kerberos for Workflow Manager View If you installed Ambari using Kerberos, the Kerberos settings for Oozie that are required for Workflow Manager are configured automatically. - In Ambari Web, browse to Services > Oozie > Configs. - On the Advanced tab, navigate to the Advanced Oozie-site section. - Verify that the Kerberos-related properties in this section are set as expected.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/configuring-ambari-views/content/amb_set_up_kerberos_for_workflow_manager_view.html
2019-05-19T17:22:47
CC-MAIN-2019-22
1558232255071.27
[]
docs.hortonworks.com
Configure keyword search for catalog items Before you begin Role required: admin About this task The search results are sorted in the following order based on the frequency of the keyword: Catalog items whose Name field contains the keyword (topmost). Catalog items whose Meta field contains the keyword. Catalog items whose Description or Short description field contains the keyword. Categories whose Name or Description field contains the keyword. Catalogs whose Name or Description field contains the key term, or the catalog items whose Class field contains the keyword (bottom). Note: If your organization has multiple service catalogs, a search returns results only from the catalog being viewed. Search results return an item only when the item is active, has a valid catalog and category association, and you are authorized to view the item. Procedure Complete the following steps to regenerate a text index for the sys_metadata table. Navigate to System Definition > Text Indexes. Open the text index for the Application File [sys_metadata] table. Click the Regenerate Text Index related link and click OK. The system schedules the table for text indexing. Complete the following steps to enable the Did you mean suggestions. Navigate to System Properties > Text Search. Under the Did You Mean Properties section, enable the Suggest alternate search spellings for knowledge, catalog or global search property. Related Tasks: Create or edit a catalog item, Create a record producer, Create an order guide, Define a content item. Related Concepts: Service Catalog for managers and end users. Related Topics: Search administration, Regenerate a text index for a table, Configure a "Did You Mean?" suggestion.
https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/service-catalog-management/task/search-catalog-item.html
2019-05-19T16:57:32
CC-MAIN-2019-22
1558232255071.27
[]
docs.servicenow.com
Gremlin Query Hints repeatMode The Neptune repeatMode query hint specifies how the Neptune engine evaluates the repeat() step in a Gremlin traversal: breadth first, depth first, or chunked depth first. The evaluation mode of the repeat() step is important when it is used to find or follow a path, rather than simply repeating a step a limited number of times. Syntax The repeatMode query hint is specified by adding a withSideEffect step to the query: g.withSideEffect('Neptune#repeatMode', 'mode').gremlin-traversal Note All Gremlin query hint side effects are prefixed with Neptune#. Available Modes BFS Breadth-First Search. Default execution mode for the repeat() step. This gets all sibling nodes before going deeper along the path. This version is memory-intensive and frontiers can get very large. There is a higher risk that the query will run out of memory and be cancelled by the Neptune engine. This most closely matches other Gremlin implementations. DFS Depth-First Search. Follows each path to the maximum depth before moving on to the next solution. This uses less memory. It may provide better performance in situations like finding a single path from a starting point multiple hops out. CHUNKED_DFS Chunked Depth-First Search. A hybrid approach that explores the graph depth-first in chunks of 1,000 nodes, rather than 1 node (DFS) or all nodes (BFS). The Neptune engine will get up to 1,000 nodes at each level before following the path deeper. This is a balanced approach between speed and memory usage. It is also useful if you want to use BFS, but the query is using too much memory. Example The following section describes the effect of the repeat mode on a Gremlin traversal. In Neptune the default mode for the repeat() step is to perform a breadth-first (BFS) execution strategy for all traversals. In most cases, the TinkerGraph implementation uses the same execution strategy, but in some cases it will alter the execution of a traversal. For example, the TinkerGraph implementation will modify the following query. g.V("3").repeat(out()).times(10).limit(1).path() The repeat() step in this traversal will be "unrolled" into the following traversal, which will result in a depth-first (DFS) strategy. g.V(<id>).out().out().out().out().out().out().out().out().out().out().limit(1).path() Important The Neptune query engine will not do this automatically. Breadth-first (BFS) is the default execution strategy, and is similar to TinkerGraph in most cases; however, there are certain cases where depth-first (DFS) strategies are preferable. BFS (default) Breadth-first (BFS) is the default execution strategy for the repeat() operator. g.V("3").repeat(out()).times(10).limit(1).path() The Neptune engine will fully explore the first nine hop frontiers before finding a solution ten hops out. This is effective in many cases, such as a shortest-path query. However, in the case of the preceding example, the traversal would be much faster using the depth-first (DFS) mode for the repeat() operator. DFS The following query uses the depth-first (DFS) mode for the repeat() operator. g.withSideEffect("Neptune#repeatMode", "DFS").V("3").repeat(out()).times(10).limit(1) This follows each individual solution out to the maximum depth before exploring the next solution.
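For completeness, the chunked mode uses exactly the same hint syntax; the following sketch simply applies CHUNKED_DFS to the earlier ten-hop example (the vertex id "3" is the same illustrative value used above):
g.withSideEffect("Neptune#repeatMode", "CHUNKED_DFS").V("3").repeat(out()).times(10).limit(1).path()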
https://docs.aws.amazon.com/neptune/latest/userguide/gremlin-query-hints-repeatMode.html
2018-12-10T02:27:46
CC-MAIN-2018-51
1544376823236.2
[]
docs.aws.amazon.com
GetRedirectedURL Method The GetRedirectedURL method of the IUrlAccessor interface returns a redirected URL for the current item. Parameters wszRedirectedURL[] [out, length_is(*pdwLength), size_is(dwSize)] Pointer to a string buffer, wszRedirectedURL, where the redirected URL will be written. dwSize [in] A DWORD that contains the size of the wszRedirectedURL string buffer. pdwLength [out] Pointer to a DWORD that contains the number of characters written to the wszRedirectedURL string buffer. Return Value This method should return PRTH_E_NOT_REDIRECTED if not implemented. For a list of error messages returned by Microsoft Office SharePoint Portal Server 2003 protocol handlers, see Protocol Handler Error Messages. Remarks If the GetRedirectedURL method is implemented, the URL that is passed to the CreateAccessor method is redirected to the value returned by this method. All subsequent relative URL links are processed based on the redirected URL. Requirements Platforms: Windows Server 2003
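As an illustration only, a protocol handler whose items are never redirected might implement the method along the following lines; the class name is hypothetical and the parameter order is assumed from the parameter list above rather than taken from the SDK headers:
HRESULT CSampleUrlAccessor::GetRedirectedURL(WCHAR wszRedirectedURL[], DWORD dwSize, DWORD *pdwLength)
{
    // No redirection is performed, so report that back to the indexer.
    if (pdwLength != NULL)
    {
        *pdwLength = 0;
    }
    return PRTH_E_NOT_REDIRECTED;
}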
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint2003/dd584796(v=office.11)
2018-12-10T02:49:20
CC-MAIN-2018-51
1544376823236.2
[]
docs.microsoft.com
Sending Your First Web Notifications You can create your notification via the dashboard. Frontuser offers two different notification delivery types: Broadcast and Trigger. To create a new web push notification, click the "Notification" link in the left navigation and press the "Create Notification" button on the listing page. Choose the notification type, Broadcast or Trigger, that suits your campaign. Once you select your preferred notification type you will see a multi-step wizard form for the notification. In this multi-step process, the Name and Content sections contain the same fields for both Broadcast and Trigger based notifications. Create New Web Notification Name Enter your notification name (think of this as the campaign name) so you can recognize your notification in the dashboard. Here, you can also enable "Sandbox Mode" for this notification. Content In this section, you can assign the Title, Content, Image URL, Landing URL, and UTM parameters for your notification. Whatever changes you make in this section are reflected in the notification preview in the "Preview" block on the right. Targeting You can define your target audience and the sending date/time or behaviour according to your notification type. If you choose a broadcast notification you can schedule it for a specific date and time; if your notification is trigger behavioural you can define the trigger rule within this step. Audience Define your notification's target audience from your existing segmentation list, or create a new rule to refine your audience. Broadcast This option appears only if your notification is a broadcast. If you want to send your notification at a specific date and time or on a regular basis, this option is suitable for your campaign. Broadcast notifications are sent to the target audience at the scheduled date and time. For notification scheduling, Frontuser provides 4 different options: - Once, when activated: On selecting this option the notification is sent immediately after you save the notification. - Once, scheduled at a specific date and time: This sends the notification at the selected date and time. - Recurrently at a specific time: If you choose this option your notification is sent recurrently in the defined time slot. You can set Daily, Weekly or Monthly as the sending frequency and a preferred time for sending. - Yes, deliver to each user at a specified time of day in their own local time zone: If you want to send the notification in the audience's local timezone, choose this option and enter your sending time. The options above only appear for Broadcast type notifications. Trigger Behavioural If you want to send notifications based on user actions, such as an abandoned product/cart page or a visit to a specific product page, create a trigger based notification and define your trigger rule. Once the rule criteria match the user's actions, the notification is triggered. You can define the trigger rule inside the Audience rule section, and choose the trigger behaviour. There are 2 different types of trigger behaviour: - Page Load: If you select this option the notification is scheduled on page load when the trigger condition rule is matched. - Exit: As an alternative to page load, this schedules the notification on page exit when the condition rule is matched. In contrast to triggering a notification, Frontuser also allows you to cancel a scheduled notification. For example, if a notification is scheduled to be sent 30 minutes after a cart is abandoned and the customer returns before that time, the scheduled notification is destroyed.
To do so, create a Cancel Trigger rule to prevent the sending of all those scheduled notifications. The option above only appears for Trigger based notifications. Notification Expiration (Time To Live) This allows you to specify the amount of time that the push notification service, such as Apple Push Notification Service (APNS) or GCM, has to deliver the message to the endpoint. If for some reason (such as the mobile device being turned off) the message is not deliverable within the specified TTL, the message is dropped and no further attempts to deliver it are made. All Done! Click the Create Notification button and your notification is created. What's next? Personalize your web push content to increase your conversion.
https://docs.frontuser.com/webpush/send-web-push/
2018-12-10T03:10:03
CC-MAIN-2018-51
1544376823236.2
[]
docs.frontuser.com
Create a report suite Steps that describe how to create a report suite, and to copy a report suite's settings to a new one. - Click Analytics > Admin > Report Suites. - Select a report suite.
https://docs.adobe.com/help/en/analytics/admin/manage-report-suites/new-report-suite/t-create-a-report-suite.html
2019-07-15T20:19:43
CC-MAIN-2019-30
1563195524111.50
[]
docs.adobe.com
Encryption on the Wire Couchbase encrypts the data moving between client and server, between servers within a cluster, and between data centers. Data moving between client and server Data moving between client and server needs to be protected from any attackers eavesdropping on the connection. Couchbase Server enables encrypted data access using SSL/TLS for client-server communications. - Secure administrative access Couchbase Server also includes support for secure administrative access, which enables administrators to administer the server securely through the browser using a public network. - Secure data access When you enable SSL/TLS, data in transit to and from the server is encrypted using the server certificate configured and stored in the client certificate store. Data Moving Between Servers within a Cluster Your data has to be available all the time (24x7x365), and your applications must be able to access that data even if any of the servers in the cluster dies. To ensure high availability, Couchbase Server replicates data within the cluster and across data centers. If you encrypt all your sensitive data in the documents, the replica copies will be transmitted as is (encrypted) and stored. For added security, it is a good security practice to use IPSec on the network that connects the Couchbase server nodes. Data Moving Between Data Centers To protect sensitive data transmitted among data centers in different geo-locations, use the secure XDCR (Cross Datacenter Replication) feature. Secure XDCR enables you to encrypt traffic between two data centers using an SSL/TLS connection. When you use secure XDCR, all traffic in the source and destination data centers will be encrypted. Encryption causes a slight increase in CPU load, because additional CPU cycles are required. It is a good security practice to rotate the XDCR certificates periodically, as per your organization’s security policy. Disabling the Couchbase Web Console on Port 8091 If you would like to force Administrators to log in to the UI over an encrypted channel, you can disable the UI over the 8091 HTTP port, so that administrators can only access the administrative web console over port 18091. To disable the Couchbase Web Console over port 8091: curl -X POST -u Administrator:password http://localhost:8091/diag/eval \ -d 'ns_config:set(disable_ui_over_http, true)' To re-enable the Couchbase Web Console over port 8091: curl -X POST -u Administrator:password http://localhost:8091/diag/eval \ -d 'ns_config:set(disable_ui_over_http, false)'
https://docs.couchbase.com/server/4.1/security/security-comm-encryption.html
2019-07-15T20:18:48
CC-MAIN-2019-30
1563195524111.50
[]
docs.couchbase.com
States for managing zpools Jorge Schrauwen <[email protected]> new salt.utils.zfs, salt.modules.zpool smartos, illumos, solaris, freebsd, linux name of storage pool export instead of destroy the zpool if present force destroy or export salt.states.zpool.present(name, properties=None, filesystem_properties=None, layout=None, config=None) ensure storage pool is present on the system name of storage pool optional set of properties to set for the storage pool optional set of filesystem properties to set for the storage pool (creation only) disk layout to use if the pool does not exist (creation only) fine-grained control over this state Note import (true) - try to import the pool before creating it if absent import_dirs (None) - specify additional locations to scan for devices on import (comma-separated) device_dir (None, SunOS=/dev/dsk, Linux=/dev) - specify device directory to prepend for non-absolute device paths force (false) - try to force the import or creation
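To illustrate how these pieces fit together in a state file, here is a minimal sketch; the pool name, disk paths, and exact layout syntax are illustrative and should be verified against the zpool state documentation for your Salt release:
simplepool:
  zpool.present:
    - properties:
        comment: pool managed by Salt
    - layout:
        - mirror:
          - /dev/disk0
          - /dev/disk1
    - config:
        import: false
        force: false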
https://docs.saltstack.com/en/latest/ref/states/all/salt.states.zpool.html
2019-07-15T20:02:35
CC-MAIN-2019-30
1563195524111.50
[]
docs.saltstack.com
5.8.0 Updates Introducing Grid View Search results can now be displayed in a grid layout, enabled through the theme API: try { Slyce.getInstance(context).getTheme().setAppearanceStyle("appearance_searchResultsType", SearchResultsListType.GRID); } catch (SlyceNotOpenedException e) { e.printStackTrace(); } Single Search Mode Enhancements Updated Exception Handling In 5.8, we've made some improvements to the ways developers using the SDK can handle common errors. Some methods previously threw a SlyceError, which was a non-specific runtime exception class. These methods have been changed to throw specific, checked exceptions for clarity. Also, we took this opportunity to improve the package structure of existing exceptions. New Exceptions We added a few new exceptions for better error granularity. - SlyceLensException - abstract base class for lens-related exceptions. - SlyceInvalidLensException - a given lens identifier is invalid, probably because it has not been configured for your account. - SlyceScannerUnsupportedException - a Scanner could not be created for the given lens identifier, probably because that lens is not configured with any local detectors. - SlyceStorageException - abstract base class for storage-related exceptions. - SlyceDatabaseException - a database error occurred. Currently, a database is only used for saving searches in Search History. - SlyceGeneralStorageException - an unspecified storage error. These should be rare. Updated Packages for Existing Exceptions Any import statements for the following exceptions will need to be updated. - SlyceInvalidSessionException moved to it.slyce.sdk.exception.session.SlyceInvalidSessionException - SlyceMissingGDPRComplianceException moved to it.slyce.sdk.exception.initialization.SlyceMissingGDPRComplianceException - SlyceNotOpenedException moved to it.slyce.sdk.exception.initialization.SlyceNotOpenedException - SlyceSearchTaskBuilderException moved to it.slyce.sdk.exception.searchtask.SlyceSearchTaskBuilderException Method Changes The following methods' signatures were changed. Any code calling these methods will need to be updated to handle the additional exceptions. - SlyceLensView init methods are now declared as throws SlyceInvalidLensException - SlyceSession createScanner methods are now declared as throws SlyceLensException, SlyceInvalidSessionException - SlyceFragment initLensView is now declared as throws SlyceInvalidLensException Bug Fixes - Improved detection when selecting a picture from the camera roll by including the point the user taps on. - We added some more robust checks to ensure the app does not crash when placed in the background. - We updated the logic for when the tooltip appears and disappears to ensure it doesn't peek out from behind layers. 5.8.1 Release Notes - On certain devices, a user was prevented from selecting an image from their photo gallery. We resolved that issue and updated the UI to better transition into the loading screen. - In the rare case where a user disabled the camera permissions for the app and then returned to the app, it would cause a crash. We are now properly checking for permissions when the app resumes. 5.8.2 Release Notes - In some cases the user-generated thumbnail would remain on the screen after the first search resolved and prevent the user from searching again. This issue has been resolved. - The clear history button will now utilize the `global_accent_color`. - When clearing the search history, tapping quickly while the popup was animating on could, in some cases, cause a crash. This is now being handled properly to avoid the crash.
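For reference, calling code affected by the createScanner signature change might be updated roughly as follows; the session and lens variables, and the SlyceScanner type name, are placeholders to be checked against the SDK javadoc:
try {
    // createScanner now throws checked lens/session exceptions in 5.8
    SlyceScanner scanner = slyceSession.createScanner(lensId);
} catch (SlyceInvalidLensException e) {
    // the lens identifier is not configured for this account
    e.printStackTrace();
} catch (SlyceLensException | SlyceInvalidSessionException e) {
    // other lens-related or session problems
    e.printStackTrace();
}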
https://docs.slyce.it/hc/en-us/articles/360024644511--Android-5-8-Release-Notes-
2019-07-15T20:09:10
CC-MAIN-2019-30
1563195524111.50
[array(['/hc/article_attachments/360022883552/Grid_View_2x.png', 'Grid_View_2x.png'], dtype=object) array(['/hc/article_attachments/360023629391/Single_Search_to_Client.gif', 'Single_Search_to_Client.gif'], dtype=object) ]
docs.slyce.it
transformNodeToObject Method (Windows CE 5.0) Processes this node and its children using the supplied Extensible Stylesheet Language Transformations (XSLT) style sheet and returns the resulting transformation. [Script] Script Syntax oXMLDOMNode.transformNodeToObject(stylesheet, outputObject); Script Parameters - stylesheet Object. Valid XML document or DOM node that consists of XSLT elements that direct the transformation of this node. - outputObject Object. On return, contains the product of the transformation of this XML document based on the XSLT style sheet. If the variant represents the DOMDocument object, the document is built according to its properties and its child nodes are replaced during this transformation process. The XML transformation can also be sent to a stream. [C/C++] C/C++ Syntax HRESULT transformNodeToObject(IXMLDOMNode* stylesheet, VARIANT outputObject); C/C++ Parameters - stylesheet [in] Valid XML document or DOM node that consists of XSL elements that direct the transformation of this node. - outputObject [in] Object that contains the product of the transformation of this XML document based on the XSLT style sheet. If the variant represents DOMDocument, the document is built according to its properties and its child nodes are replaced during this transformation process. If the variant contains an IStream interface, the XML transformation is sent to this stream. C/C++ Return Values - S_OK Value returned if successful. - E_INVALIDARG Value returned if stylesheet or outputObject is Null. Requirements OS Versions: Windows CE .NET 4.0 and later. Header: Msxml2.h, Msxml2.idl. Link Library: Uuid.lib. General Remarks This method is only valid if the XSLT feature has been included in the operating system (OS). If a call to this method is made and XSLT is not supported, an error message will be returned. The stylesheet parameter must be either a DOMDocument node, in which case the document is assumed to be an XSLT style sheet, or a Document Object Model (DOM) node in the XSLT style sheet, in which case this node is treated as a stand-alone style sheet fragment. The source node defines a context in which the style sheet operates, but navigation outside this scope is allowed. For example, a style sheet can use the id function to access other parts of the document. This method supports both stand-alone and embedded style sheets and also provides the ability to run a localized style sheet fragment against a particular source node. The transformNodeToObject method always generates a Unicode byte-order mark, which means it cannot be used in conjunction with other Active Server Pages (ASP) Response.Write or Response.BinaryWrite calls. This member is an extension of the World Wide Web Consortium (W3C) DOM. This method applies to the following objects and interfaces: IXMLDOMAttribute, IXMLDOMCDATASection, IXMLDOMCharacterData, IXMLDOMComment, IXTLRuntime.
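A short script-level sketch of the call pattern described above; the file names are illustrative:
var xmlDoc = new ActiveXObject("Msxml2.DOMDocument");
var xslDoc = new ActiveXObject("Msxml2.DOMDocument");
var result = new ActiveXObject("Msxml2.DOMDocument");
xmlDoc.async = false;
xslDoc.async = false;
xmlDoc.load("books.xml");   // source document
xslDoc.load("books.xsl");   // XSLT style sheet
// Transform xmlDoc and build the output into the supplied DOMDocument.
xmlDoc.transformNodeToObject(xslDoc, result);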
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/ms896488%28v%3Dmsdn.10%29
2019-07-15T21:03:31
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
The first time you take a look at Epopée.me, it feels like this project is a very minimalist interactive documentary. In fact, the interaction goes beyond the digital world and expands into the streets of Montréal where marginalized people struggle to live their life. Rodrigue Jean, who started the project, tells us a bit more about Epopée. I-docs.org: Can you tell us when, how, why and with whom the project started? Rodrigue Jean: The project started 5 years ago during the shoot of «Men for Sale» — a documentary about sex workers in Montreal. Half way through the project, the men who were taking part in the documentary requested that we moved on to create another project which would be based on fiction. The team was more than willing to do so, as we were starting to feel awkward in the position of recording the«intensity» of the men’s lives. We could give something back by facilitating a process where stories and technical abilities could be shared. ‘Men for Sale’ had lasted for a year and a half followed by another year of editing. So immediately after I started raising money for the production of Épopée. I-docs.org: The introduction says it’s an evolving project, can you tell us more about that? RJ: Épopée is based on writing workshops held twice a week in a drop-in centre for sex workers. The participants come up with stories which are usually closely related to their lives. We work for about two months on each participant’s stories, teaching basic scriptwriting techniques as part of the process. The script is then put into production and goes online as soon as it is edited. Participants can view their work online as they start a new writing project. I-docs.org: Have you planned new stories for 2012? RJ: Épopée has been going on for twice as long as planned. Those participants who couldn’t write or did not want to, asked us instead to film them as they went about their daily activities. This was quite different from the approach of «Men for Sale», a documentary which interweaved narratives relying solely on speech. In «Men for Sale» participants were never seen selling sex or taking drugs. We felt that such exposure would be detrimental to them. I-docs.org: There’s a very interesting story about how you created the logo, can you explain the creative process behind it and its meaning for the project? RJ: In Montreal, as in many big cities in the world, zones of exclusion have being created. The pretexts are many: security, policing, health concerns, cultural rationales, etc. «Men for Sale» and «Épopée» take place in an area where, as well as shops and apartments, activities related to sex work, the commerce of drugs, gay bars and saunas are concentrated. People living on the street call this zone, «the box». Resource centers dealing with homelessness, drug use and HIV prevention are also situated in «the box». In this area of town, the police habitually impose an inordinate number of fines to people either living or spending a lot of time on the street. When a person has too many of those these fines a judge will issue an injunction to stop the person accessing the area. The expression to describe this situation is: «I’m boxed». Hence the rectangle at the start of each clip you view online. I-docs.org: In epopee, fiction and reality are tightly connected, how did you manage to “play” with these very different registers? RJ: The mix between fiction and documentary happens by itself — screenwriters are often acting in their own stories albeit not necessarily as their own «character». 
This same person might also appear in the documentaries in situations not unrelated to the fictions, etc. I-docs.org: How did participants of the project react? Did-it provoke unexpected responses from them? RJ: I think when you really get involved in any work — creative work — it has an effect on your life. Usually you also want to be better next time round. Having said that, psychoanalysis tells us that drug addiction bars access to symbolization. Since writing stories and making films is mainly about symbolization, one can imagine that «something happens» when a person gets involved in that process. I-docs.org: How did you manage to engage participants in the creative process? Was it hard to convince them to play characters? RJ: The acting part of the process is quite similar to the work we do with professional actors. Only here the actors don’t have to learn new tricks since the situations we film are always close to what the participants already know. It is more of a process of simplifying. I-docs.org: The technology you’ve been using for the platform is HTML5, why didn’t you choose flash, did you have different versions of the site, what were the challenges in using that kind of technology. RJ: Since Epopée is about accessibility and most popular browsers support HTML5, it seemed an obvious choice. I-docs.org: The interface doesn’t relate to any kind of narrative form, as it’s usually the case with interactive documentaries, was this a choice? RJ: I think more complex interactive narratives can arise for viewers when they are presented with open ended «events». Interaction in that sense seems to belong to the early days of the web. Isn’t the web now becoming a relay between life events? For a lot of people war has now moved from the video game to the street. Only it’s not a game anymore. I-docs.org: The use of full screen video and minimalist design obviously reflects a “parti pris”; can you tell us more about that? RJ: The intention was to create what we imagined could be a «cinema for the web». Epopee.me is a second version of Épopée. The first site was working with pop-ups as it is sometime used in porn adverts on the web. It was much more tricky to operate and we felt it belong to an earlier era of the web. I-docs.org: What are the reactions so far to the project in terms of audience/participants/professionals/traffic? RJ: The web was meant to be the first window for the project, followed by art gallery installations and screenings in cinemas. All this has happened in Quebec. We are now looking to bring the project to different cities. I-docs.org: Did the project change the participants’ view on themselves? Can you elaborate on that notion? RJ: Insofar as 30 persons whose lives are fairly unsettled have been dedicated to writing, acting and producing Épopée for the last two years. I would say that their lives have indeed been affected. I-docs.org: If you wish to add anything else about the project or the people involved, please feel free to do so. RJ: Épopée is also the result of 25 film technicians who have been coming together to create a situation which is out of the ordinary. They have not been in it for financial reward but for the sake of creating something as a group.
http://i-docs.org/2012/02/28/epopee-me/
2019-07-15T20:26:22
CC-MAIN-2019-30
1563195524111.50
[]
i-docs.org
Official Release Date - March 26, 2019 Download - Build 4.10.02 Modified features V-Ray for Katana - Allow adding a statement path by shift-dragging a VrayVolumeGrid_In or VrayMesh_In node onto MaterialAssign's CEL parameter - Implement preview for Directional LightRectangle in the Hydra Viewer - Update the parameters of the Denoiser render element - Expose BRDFHair4 in the shader list for Material nodes - Make it possible to export .vrscene files in batch mode - Add links in the inline documentation for V-Ray nodes which point to our documentation site - Switch the VRayMtl's roughness model to Oren-Nayar - Switch Displacement to Pre-tessellated mode by default - Switch Subdivision surfaces to Pre-tessellated mode by default Bug Fixes V-Ray for Katana - Fixed crash when trying to export a vrscene to a non-existent folder - Fixed Python script errors when running Katana in batch mode - Made the exporter more strict and robust V-Ray - Direct light selects are always propagated through refractions - Incorrect lighting elements on matte objects when the "Consistent lighting elements" option is enabled - Fixed incorrect consistent lighting elements with alSurface and VRayFastSSS2 VRayVolumeGrid - Fixed rendering slowdown when the memtracker is enabled during compilation
https://docs.chaosgroup.com/display/VNFK/V-Ray+Next%2C+Hotfix+1
2019-07-15T20:53:29
CC-MAIN-2019-30
1563195524111.50
[]
docs.chaosgroup.com