Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
Returns a list of resources (for example, DB instances) that have at least one pending maintenance action. See also: AWS API Documentation. See 'aws help' for descriptions of global parameters. describe-pending-maintenance-actions is a paginated operation; its result key is PendingMaintenanceActions. describe-pending-maintenance-actions [--resource-identifier <value>] [--filters <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>] --resource-identifier (string) The ARN of a resource to return pending maintenance actions for. --filters (list) A filter that specifies one or more resources to return pending maintenance actions for. Supported filters: - db-cluster-id - Accepts DB cluster identifiers and DB cluster ARNs; the results list only includes resources with at least one pending maintenance action for the identified DB clusters. The following describe-pending-maintenance-actions example lists the pending maintenance action for a DB instance. aws rds describe-pending-maintenance-actions Output: { "PendingMaintenanceActions": [ { "ResourceIdentifier": "arn:aws:rds:us-west-2:123456789012:cluster:global-db1-cl1", "PendingMaintenanceActionDetails": [ { "Action": "system-update", "Description": "Upgrade to Aurora PostgreSQL 2.4.2" } ] } ] } For more information, see Maintaining a DB Instance in the Amazon RDS User Guide. PendingMaintenanceActions -> (list) A list of the pending maintenance actions for the resource. (structure) Describes the pending maintenance actions for a resource. ResourceIdentifier -> (string) The ARN of the resource that has pending maintenance actions. Marker -> (string) An optional pagination token provided by a previous DescribePendingMaintenanceActions request. If this parameter is specified, the response includes only records beyond the marker, up to a number of records specified by MaxRecords.
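For illustration, a minimal Python (boto3) sketch of the same call is shown below. The filter name db-instance-id and the example ARN are assumptions for the sketch, not values taken from the page above.

import boto3

# Hypothetical illustration: list pending maintenance actions, filtered to one
# DB instance by its ARN. Region, account ID, and instance name are placeholders.
rds = boto3.client("rds", region_name="us-west-2")

response = rds.describe_pending_maintenance_actions(
    Filters=[
        {
            "Name": "db-instance-id",
            "Values": ["arn:aws:rds:us-west-2:123456789012:db:mydbinstance"],
        }
    ]
)

for resource in response["PendingMaintenanceActions"]:
    print(resource["ResourceIdentifier"])
    for action in resource["PendingMaintenanceActionDetails"]:
        print("  ", action["Action"], "-", action.get("Description", ""))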
https://docs.aws.amazon.com/cli/latest/reference/rds/describe-pending-maintenance-actions.html
2021-01-16T03:15:39
CC-MAIN-2021-04
1610703499999.6
[]
docs.aws.amazon.com
Board View v2 We've reinvented Board view from the ground up with features such as: - Sorting - Hide empty columns - Collapse columns - Redesigned cards - One-click closing tasks - Multitask Toolbar - Edit columns - Cover images - View all tasks - Archive tasks (or an entire column) Pin Cover Images By default, ClickUp now sets the most recent image as a preview in Board view. You may also customize which image is always shown by Pinning an image to a task. When you open a Task, this image will be shown at the top - front and center. PS - don't want to see images in Board view? You don't have to - we made it optional. View All Projects and Spaces in Board view - When viewing all Projects or Spaces in Board View, all possible custom status columns will be shown - Dragging a task through the board will automatically hide columns that the task can't be dragged to. Multitask Toolbar in Board view - Select individual tasks in board view or an entire column to mass edit tasks with the Multitask Toolbar Archive Tasks Archiving a task removes it from view but allows you to restore it at any time. All tasks are preserved and are still searchable. + bulk archive tasks with the Multitask Toolbar or an entire column in Board view. Collapse Columns in Board view Remove columns that aren't important to you by collapsing the status in Board View Hide Empty Columns in Board view Get rid of the noise! You now have the option to hide all columns that don't have tasks, so you can declutter and see what matters. Improvements GitLab: Support for Self-hosted - You may now connect self-hosted GitLab installations to your ClickUp team Exports - Exports now include columns for tags, checklists, and time estimates - This includes team exports as well as reporting exports Multitask Toolbar - Add dependencies - Archive tasks
https://docs.clickup.com/en/articles/2267034-release-1-44
2021-01-16T03:36:57
CC-MAIN-2021-04
1610703499999.6
[]
docs.clickup.com
You have to register a Liquid account and pass account verification in order to use the service. Go to Liquid.com and register an account. Fill in the details, making sure to enter your [Legal First Name] and [Legal Last Name], as account verification is required. Liquid will immediately send you a verification email; click on the link in the email to activate your account. The crypto exchange adopts a high security standard, so users have to set up 2FA before using its services. 1. After completing the basic info and activating your account, the following appears when you log in again; click "Enable 2-Factor Authentication". 2. Download an authenticator app on your mobile, for example Google Authenticator (Google Play Download / App Store Download). Scan the QR code, then fill in the 6-digit code generated by Google Authenticator. Once the code is entered correctly, the setup of 2-Factor Authentication is complete. After completing 2FA, you can have a look at the exchange panel, but the only thing you can do is deposit. To enable withdrawals, account verification has to be completed; please follow these steps: 1. Check your account status by clicking on your avatar in the top right-hand corner; it shows "Pending". 2. Click on the item circled in red above, which brings you to the page below. Click on the button below to start submitting your personal credentials for account verification. 3. You have to provide the following: National ID Document, Selfie, and Proof of Address (e.g. utility bills, government correspondence, etc.). The document should have your legal name on it. Submit the documents and wait for Liquid to confirm your application. It takes a few days to go through if everything is in order. Liquid Account Verification & Identification Changes How do I submit the required KYC documents? If you have any questions about using Liquid and their products, please check the Liquid Help Center or contact Liquid customer service. Click on the bubble at the lower right-hand corner of the Liquid website or Liquid Help Center to find instant support.
https://docs.like.co/user-guide/likecoin-token/registering-on-liquid
2021-01-16T03:35:59
CC-MAIN-2021-04
1610703499999.6
[]
docs.like.co
This guide outlines the post-deployment Day-2 operations for a Mirantis OpenStack for Kubernetes environment. It describes how to configure and manage the MOS components, perform different types of cloud verification, and enable additional features depending on your cloud needs. The guide also contains day-to-day maintenance procedures such as how to back up and restore, update and upgrade, or troubleshoot your MOS cluster.
https://docs.mirantis.com/mos/latest/ops-guide/intro.html
2021-01-16T02:27:13
CC-MAIN-2021-04
1610703499999.6
[]
docs.mirantis.com
How to Define REST Interfaces in InterSystems IRIS There are two ways to define REST interfaces in InterSystems IRIS: Define an OpenAPI 2.0 specification and then use the API Management tools to generate the code for the REST interface. Manually code the REST interface, then define a web application in the Management Portal. This First Look shows how to manually code the REST interface, including developing a dispatch class. If you prefer to use the specification-first method, these dispatch classes are generated automatically and should not be edited. Advantages of the specification-first definition include the ability to automatically generate documentation and client code from the specification, but you can use either approach to define REST interfaces. For more information about defining REST interfaces using a specification, see Creating REST Services.
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_rest_define
2021-01-16T02:43:51
CC-MAIN-2021-04
1610703499999.6
[]
docs.intersystems.com
The group met 11+ times this quarter. Recent Activities First Activity Accessibility/Streaming Subcommittee (last 6 months): * Met on February 12, 2020 and April 13, 2020 * We can make a call for volunteers for shepherding sessions as a way to get people involved to meet promotion/tenure requirements (badges in ALA Connect?) * ADAhospitality.org is a great resource to rely upon to ensure Forum is accessible to attendees *Forum is transitioning to a Virtual Conference *Project management will be important for tasks that need to get done (we can use tools like Trello) **Each subcommittee will have distinct roles for completing these tasks **Berika will set up a Trello Board for the entire Forum Planning committee. *Accessibility will always remain a top consideration: **Presenters will need to be trained on accessibility to describe visuals for those who are visually impaired and captioning (or ASL live translation) will also be needed **There also needs to be support for call-in users and those with low vision impairment **Carli Spina is still slated to give a webinar to accepted speakers on accessibility; she will be updated on Forum and followed up with ** We need to first address the overall structure of streaming and offering a Virtual Forum with these considerations **We need to enforce standards for accessibility for everyone involved *LITA Office will be handling logistics (incl. streaming with the selected platform); this subcommittee is to focus solely on accessibility of events * Carli Spina released a pre-recorded webinar for accepted Forum session speakers on "Accessibility for Presenters" Meets LITA’s strategic goals for Member Engagement Second Activity Programming Subcommittee (last 6 months): * Met on March 10, 2020; June 2, 2020; July 7, 2020; July 29, 2020 * Call for proposals extended 3 times from original March 30th date (April 8, May 22, and June 12) * Registration will open soon, waiting on ALA to finish the registration form * We can use FormStack (or alternative) to evaluate each session proposal * To reduce bias in evaluation of submissions, identifying information will be removed (Name, Title, Institution) but Institution Type will be present (Academic, Public, School, Special, etc.) * Total of 80 session proposals and a handful of preconference proposals were received **Preconference proposals were emailed about possible consideration in becoming a webinar/web course for LITA, ALCTS, and LLAMA respectively **Sessions are still in the process of being selected **All proposers will be notified of their status by the end of August 2020 Meets LITA’s strategic goals for Education and Professional Development Third Activity Sponsorship Subcommittee (last 6 months): * March 5, 2020; June 11, 2020 * After obtaining feedback about sponsorship opportunities for mid-size and smaller companies, we are revising our Sponsorship Prospectus and making the current sponsor a top-tier sponsor * On 4/6/20, sponsorship activities were suspended until further guidance is received regarding the COVID-19 public health crisis about Forum Planning (SEE email dated 4/13 on COVID-19 Update.)
*Meeting resumed in June *Updated emails informing potential sponsors of Core Virtual Forum went out in July 2020 Meets LITA’s strategic goals for Organizational Stability and Sustainability Fourth Activity UX/Sustainability Subcommittee (Last 6 months): * Met on March 6, 2020; May 01, 2020; June 5, 2020; July 10, 2020 * Discussed recruiting additional members/volunteers for the Local Arrangements subcommittee by reaching out to regional listservs (when we thought Forum was going to be in-person) * Had questions for the LITA Office which have since been addressed: * On 4/6/20, UX/Sustainability and Local Arrangements activities were suspended until further guidance is received regarding the COVID-19 public health crisis about Forum Planning (SEE email dated 4/13 on COVID-19 Update). * Resumed meetings on 05/01/20 *Met with Emerging Leaders for their project on sustainability for future in-person Forums *Discussed ideas for structured socialization: **Virtual Lounge Zoom room **Collaboration Room **Schedule breaks (chair yoga, magic show) **Evening Social Events (game night, happy hour, escape room, Netflix party) *Establishing a sense of place for attendees **The goal here is to create an environment and experience so that attendees feel like they have a connection to their fellow attendees and eliminate the sense of being with 100 strangers on the internet and replace it with a sense of being among colleagues. *Create a map of virtual "rooms" or spaces where different activities/conversations are taking place so that people have a visual diagram of what's going on and available to them. **Assign attendees to a "table" that will then determine their breakout room **Send an optional swag-bag with a handful of items that may be useful during the conference to the first 100-200 registrants. *Session moderation and speaker preparation **Speaker preparation ***Work with Programming subcommittee to provide recommendations for incorporating interactive elements into the presentation (see the SUNYLA best practices doc for some recommendations) ***Work with Programming/speakers to hold a test practice session to ensure all presenters are acclimated to the tools. ***Session moderation – This topic actually didn't come up in the meeting, but I thought of it later. Is it the UX subcommittee or the programming subcommittee that will be responsible for coordinating session "hosts"? We do this for the in-person conference and I think it will be just as useful, if not more useful, to have in the virtual forum.
Meets LITA’s strategic goals for Member Engagement Fifth Activity Web/PR Subcommittee (Last 6 months): * Met on March 10, 2020; June 01, 2020 *Official Hashtag: #CoreForum2020 *Sent out targeted communications to LITA, ALCTS, and LLAMA respectively about tracks that might be of interest to membership for Call for Proposals *We can work on a promotional video to promote Forum **YouTube offers royalty-free audio and library clips where no attributions are needed *Language for updating the website with a COVID-19 contingency planning update (SEE COVID-19 Update email sent to Forum Planning Members, sent out 4/13) – website will be updated to announce that Forum will go virtual *Discuss a weekly PR calendar **Every other Monday Berika will send out a draft in a Google Doc for promoting Forum **Friday that same week is the deadline for Web/PR Subcommittee to provide edits/feedback on that draft **The following Monday the final draft will be sent to Chrishelle (LITA), Brooke (ALCTS), and Fred (LLAMA) for distribution. *Volunteer to work on various portions of the Forum website **Statement of Appropriate Conduct Page – G. **Planning Committee Page – K. **Keynotes (will be published this week) — Berika will work with Web/PR Subcommittee on crafting an announcement **All other pages (forthcoming based on speaker selections from programming and content/work from other subcommittees) *Will work on article promoting Forum for ALCTS Meets LITA’s strategic goals for Member Engagement What will your group be working on for the next three months? * After LITA Office confirms contract with Learning Times, working with them to set Forum schedule with accessibility and UX considerations * Wait on ALA office to release registration * Complete accepting sessions for Forum and send all proposals a response * Complete planning of social events surrounding Forum *Follow up with potential sponsors again Please provide suggestions for future education topics, initiatives, publications, resources, or other activities that could be developed based on your group’s work. All non-accepted proposals that indicated interest in publication will be put in touch with the appropriate Core Committee for webinars/web courses/web series and publications. Additional comments or concerns: For Core Forum going forward (after 2020): Appointments committee must establish a Forum Planning leadership team consisting of 2-3 overall chairs, AND a chair for each Forum subcommittee (Accessibility, Programming, Sponsorship, UX/Sustainability, Web/PR) to reduce stress/burnout on one person. This must happen upfront, before the committee is established. Submitted by Berika Williams on 08/04/2020
https://docs.lita.org/category/reporting-period/september/
2021-01-16T01:54:23
CC-MAIN-2021-04
1610703499999.6
[]
docs.lita.org
This section provides an overview of the tools used during the OpenStack upgrade. MCP Drivetrain provides the following OpenStack-related upgrade pipelines: Deploy - upgrade control VMs Deploy - upgrade computes Deploy - upgrade OVS gateway Each job consists of different stages that are described in detail in the related sections below.
https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/update-upgrade/major-upgrade/upgrade-openstack/os-upgrade-tools.html
2021-01-16T02:42:15
CC-MAIN-2021-04
1610703499999.6
[]
docs.mirantis.com
LinuxCNC HAL interface for Rust Please consider becoming a sponsor so I may continue to maintain this crate in my spare time! Documentation: a safe, high-level interface to LinuxCNC's HAL (Hardware Abstraction Layer) module. For lower-level, unsafe use, see the linuxcnc-hal-sys crate. Development setup bindgen must be set up correctly. Follow the requirements section of its docs. To run and debug any HAL components, the LinuxCNC simulator can be set up. There's a guide here for Linux Mint (and other Debian derivatives). Project setup This crate depends on the linuxcnc-hal-sys crate, which requires the LINUXCNC_SRC environment variable to be set to correctly generate the C bindings. The value must be the absolute path to the root of the LinuxCNC source code. The version of the LinuxCNC sources must match the LinuxCNC version used in the machine control. # Clone LinuxCNC source code into linuxcnc/ git clone # Check out a specific version tag. This may also be a commit, but must match the version in use by the machine control. cd linuxcnc && git checkout v2.8.0 && cd .. # Create your component lib cargo new --lib my_comp cd my_comp # Add LinuxCNC HAL bindings as a Cargo dependency with cargo-edit cargo add linuxcnc-hal LINUXCNC_SRC=/path/to/linuxcnc/source/code cargo build Examples Create a component with input and output This example creates a component called "pins" with a single input pin ("input-1") and output pin ("output-1"). It enters an infinite loop which updates the value of output-1 every second. LinuxCNC convention dictates that component and pin names should be dash-cased. This example can be loaded into LinuxCNC with a .hal file that looks similar to this: loadusr -W /path/to/your/component/target/debug/comp_bin_name net input-1 spindle.0.speed-out pins.input-1 net output-1 pins.output-1 Pins and other resources are registered using the Resources trait. This example creates a Pins struct which holds the two pins. HalComponent::new() handles component creation, resource (pin, signal, etc.) initialisation and UNIX signal handler registration. use linuxcnc_hal::{ error::PinRegisterError, hal_pin::{InputPin, OutputPin}, prelude::*, HalComponent, RegisterResources, Resources, }; use std::{ error::Error, thread, time::{Duration, Instant}, }; struct Pins { input_1: InputPin<f64>, output_1: OutputPin<f64>, } impl Resources for Pins { type RegisterError = PinRegisterError; fn register_resources(comp: &RegisterResources) -> Result<Self, Self::RegisterError> { Ok(Pins { input_1: comp.register_pin::<InputPin<f64>>("input-1")?, output_1: comp.register_pin::<OutputPin<f64>>("output-1")?, }) } } fn main() -> Result<(), Box<dyn Error>> { pretty_env_logger::init(); // Create a new HAL component called `rust-comp` let comp: HalComponent<Pins> = HalComponent::new("rust-comp")?; // Get a reference to the `Pins` struct let pins = comp.resources(); let start = Instant::now(); // Main control loop while !comp.should_exit() { let time = start.elapsed().as_secs() as i32; // Set output pin to elapsed seconds since component started pins.output_1.set_value(time.into())?; // Print the current value of the input pin println!("Input: {:?}", pins.input_1.value()); // Sleep for 1000ms. This should be a lower time if the component needs to update more // frequently.
thread::sleep(Duration::from_millis(1000)); } // The custom implementation of `Drop` for `HalComponent` ensures that `hal_exit()` is called // at this point. Registered signal handlers are also deregistered. Ok(()) }
https://docs.rs/crate/linuxcnc-hal/0.2.0
2021-01-16T02:45:33
CC-MAIN-2021-04
1610703499999.6
[]
docs.rs
fmbp16:~ root# diskutil umountDisk /dev/disk2 Unmount of all volumes on disk2 was successful fmbp16:~ root# fdisk -e /dev/disk2 fdisk: could not open MBR file /usr/standalone/i386/boot0: No such file or directory Enter 'help' for information fdisk: 1> f 1 Partition 1 marked active. fdisk:*1> write Writing MBR at offset 0. fdisk: 1> exit fmbp16:ESXi root# cat ISOLINUX.CFG | grep APPEND APPEND -c boot.cfg fmbp16:ESXi root# sed -i "" 's/APPEND -c boot.cfg/APPEND -c boot.cfg -p 1/g' ISOLINUX.CFG fmbp16:ESXi root# cat ISOLINUX.CFG APPEND -c boot.cfg -p 1
http://docs.gz.ro/macos-esxi-install-bootable-usb.html
2021-01-16T03:00:43
CC-MAIN-2021-04
1610703499999.6
[array(['http://docs.gz.ro/sites/default/files/styles/thumbnail/public/pictures/picture-1-1324065756.jpg?itok=rS4jtWxd', "root's picture root's picture"], dtype=object) ]
docs.gz.ro
ipinfoio_facts – Retrieve IP geolocation facts of a host’s IP address Notes Note - Check for more information Examples # Retrieve geolocation data of a host's IP address - name: get IP geolocation data ipinfoio_facts: Returned Facts Facts returned by this module are added/updated in the hostvars host facts and can be referenced by name just like any other host fact. They do not need to be registered in order to use them. Status - This module is not guaranteed to have a backwards compatible interface. [preview] - This module is maintained by the Ansible Community. [community]
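The module gathers its facts from the ipinfo.io web service. As an illustration only, here is a minimal Python sketch of the equivalent HTTP lookup; the https://ipinfo.io/json endpoint and the field names shown are assumptions about that service, not details taken from the module documentation above.

import json
import urllib.request

# Hypothetical illustration of the lookup this module performs:
# query ipinfo.io for the calling host's own IP geolocation data.
with urllib.request.urlopen("https://ipinfo.io/json", timeout=10) as resp:
    facts = json.load(resp)

# Typical keys include "ip", "city", "region", "country", "loc", and "org".
print(facts.get("ip"), facts.get("city"), facts.get("country"))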
https://docs.ansible.com/ansible/2.9/modules/ipinfoio_facts_module.html
2021-01-16T03:32:36
CC-MAIN-2021-04
1610703499999.6
[]
docs.ansible.com
Genesys SMS Routing (CE29) for Genesys. Business and Distribution Logic Business Logic There is no applicable content for this section. Distribution Logic There is no applicable content for this section. User Interface & Reporting Customer Interface Requirements - Customer cellular phone that can send or receive SMS messages. Agent Desktop Requirements - Standard Genesys Cloud user interface. - Access to response library. - Admin and Architect access to provision and configure SMS numbers and flows. Reporting Real-time Reporting Genesys Cloud comes with a set of real-time dashboards, views, and reports. These views and reports work across all channels including messaging, which also shows all SMS messages. This feature enables supervisors to gain insight into the SMS traffic that the system handles. The following list outlines some of the key views that expose analytics data: - Interactions: A detailed view that provides information related to each conversation and shows every step along the way for an SMS message. - Queue Activity: Real-time view of the conversations waiting in queue. - Queue Performance: Queue metrics specific to SMS volume, including the ability to get insight into SL, Handle Time, ACW, and other key metrics specific to SMS. - Agent Performance: Specific metrics around agents, including Handle Time, number of SMS conversations, and more. - Wrap-Up Performance: Detailed insight into selected wrap-up codes. - Skills Performance: Detailed insight and metrics specific to skills-based routing. - Several Canned Reports: Set of canned reports specific to the various needs of contact centers and specific to messaging. Genesys Cloud continuously releases new capabilities. For additional information and details on newly released analytics features, see the release notes on the Resource Center at help.mypurecloud.com. Historical Reporting See above. Assumptions General Assumptions There is no applicable content for this section. Customer Assumptions - Customer secures and provisions a dedicated long code or text-enabled toll-free number, enabling them to send SMS messages in Genesys Cloud. Interdependencies All required, alternate, and optional use cases are listed here, as well as any exceptions. Cloud Assumptions - Each Genesys DC must purchase an SMS server to serve as the reverse proxy server for cloud customers. Related Documentation Document Version - V 1.0.0
https://all.docs.genesys.com/UseCases/Current/GenesysCloud/CE29
2021-01-16T02:34:59
CC-MAIN-2021-04
1610703499999.6
[]
all.docs.genesys.com
ansible.builtin.sh – POSIX shell (/bin/sh) Note This module is part of ansible-base and included in all Ansible installations. In most cases, you can use the short module name sh even without specifying the collections: keyword. Despite that, we recommend you use the FQCN for easy linking to the module documentation and to avoid conflicting with other collections that may have the same module name. Synopsis This shell plugin is the one you want to use on most Unix systems; it is the most compatible and widely installed shell.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/sh_shell.html
2021-01-16T03:39:25
CC-MAIN-2021-04
1610703499999.6
[]
docs.ansible.com
The general purpose family includes the M1 and M3 VM types. These types provide a balance of CPU, memory, and network resources, which makes them a good choice for many applications. The VM types in this family range in size from one virtual CPU with two GB of RAM to eight virtual CPUs with 30 GB of RAM. The balance of resources makes them ideal for running small and mid-size databases, more memory-hungry data processing tasks, caching fleets, and backend servers. M1 types offer smaller instance sizes with moderate CPU performance. M3 types offer a larger number of virtual CPUs that provide higher performance. We recommend you use M3 instances if you need general-purpose instances with demanding CPU requirements. The compute optimized family includes the C1 and CC2 instance types, and is geared towards applications that benefit from high compute power. Compute-optimized VM types have a higher ratio of virtual CPUs to memory than other families but share the NCs (node controllers) with non-optimized ones. We recommend this type if you are running any CPU-bound scale-out applications. CC2 instances provide high core count (32 virtual CPUs) and support for cluster networking. C1 instances are available in smaller sizes and are ideal for scaled-out applications at massive scale. The memory optimized family includes the CR1 and M2 VM types and is designed for memory-intensive applications. We recommend these VM types for performance-sensitive databases, where your application is memory-bound. CR1 VM types provide more memory and faster CPUs than M2 types. CR1 instances also support cluster networking for bandwidth-intensive applications. M2 types are available in smaller sizes, and are an excellent option for many memory-bound applications. The Micro family contains the T1 VM type. The t1.micro provides a small amount of consistent CPU resources and allows you to increase CPU capacity in short bursts when additional cycles are available. We recommend this type for lower-throughput applications like a proxy server or administrative applications, or for low-traffic websites that occasionally require additional compute cycles. We do not recommend this VM type for applications that require sustained CPU performance. The following tables list each VM type Eucalyptus offers. Each type is listed in its associated VM family.
https://docs.eucalyptus.cloud/eucalyptus/5/user_guide/using_instances/understanding_instances/instances_types/vm_types/
2021-01-16T03:33:13
CC-MAIN-2021-04
1610703499999.6
[]
docs.eucalyptus.cloud
Use the “group by” icon in the top right corner of your screen.
https://docs.clickup.com/en/articles/1116076-priorities
2021-01-16T03:23:37
CC-MAIN-2021-04
1610703499999.6
[]
docs.clickup.com
You can modify metrics timing and reporting defaults. When using the default CloudWatch properties, metrics reporting can take around 15 minutes; the workflow is sequential and cumulative. The sensor data point timing values can be shortened by changing variables in the CLC. These changes will increase network traffic as polling will be done more frequently. Modify the default polling interval CLC variable to a number less than 5. cloud.monitor.default_poll_interval_mins This is how often the CLC sends a request to the CC for sensor data. Default value is 5 minutes. Modify the history size CLC variable to a number less than 5. cloud.monitor.history_size This is how many data value samples are sent in each sensor data request. The default value is 5. The frequency of data points is either 1 minute (if cloud.monitor.default_poll_interval_mins is 1 minute) or half the value of cloud.monitor.default_poll_interval_mins (if that value is greater). So by default, with a cloud.monitor.default_poll_interval_mins of 5 minutes and a cloud.monitor.history_size of 5, every 5 minutes the CLC asks for the last 5 data points from the CC, which should be timed for every 2.5 minutes (e.g., 2.5 minutes ago, 5 minutes ago, 7.5 minutes ago, and 10 minutes ago). These values may be skewed a bit based on the time the CC uses.
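A small Python sketch of the timing arithmetic described above, assuming the default values of cloud.monitor.default_poll_interval_mins = 5 and cloud.monitor.history_size = 5; the helper function below is illustrative only and is not part of Eucalyptus.

# Illustrative only: reproduce the sensor-data timing described above.
def sensor_request_timing(poll_interval_mins: float = 5, history_size: int = 5):
    """Return (request frequency, spacing of data points), both in minutes."""
    # The CLC asks the CC for sensor data every poll_interval_mins minutes.
    request_frequency = poll_interval_mins
    # Data-point spacing is 1 minute when the poll interval is 1 minute,
    # otherwise half the poll interval.
    spacing = 1 if poll_interval_mins == 1 else poll_interval_mins / 2
    return request_frequency, spacing

# Defaults: a request every 5 minutes covering 5 samples spaced 2.5 minutes apart.
print(sensor_request_timing())        # (5, 2.5)
print(sensor_request_timing(1, 5))    # (1, 1)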
https://docs.eucalyptus.cloud/eucalyptus/5/user_guide/using_monitoring/monitoring_tasks/monitoring_viewing/metrics_modify_defaults/
2021-01-16T02:47:35
CC-MAIN-2021-04
1610703499999.6
[]
docs.eucalyptus.cloud
Contact Support For contact information, see the main Support contact page. For detailed information about working with Splunk Support, see Working with Support and the Support Portal. See also "Anonymize data samples to send to Support" in this manual; note that we cannot guarantee compliance with your particular security policy. To generate a core file, set the core size limit to unlimited and restart Splunk: # ulimit -c unlimited # splunk restart This setting only affects the processes you start from the shell where you ran the ulimit command. To find out where core files land in your particular UNIX flavor and version, consult the system documentation. The text below includes some general rules that may or may not apply. On UNIX, if you start Splunk with the --nodaemon option ( splunk start --nodaemon), it may write the core file to the current directory. Without the flag the expected location is / (the root of the filesystem tree). However, various platforms have various rules about where core files go with or without this setting. Consult your system documentation. If you do start splunk with --nodaemon, you will need to, in another shell, start the web interface manually with splunk start splunkweb. Depending on your system, the core may be named something like core.1234, where '1234' is the process ID of the crashing program.
https://docs.splunk.com/Documentation/Splunk/7.0.8/Troubleshooting/ContactSplunkSupport
2021-01-16T03:22:09
CC-MAIN-2021-04
1610703499999.6
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
ZScript Interfaces Max Detail for UV Map This example plugin will give you an estimate of the amount of detail that your UVs and map size will be able to capture. If you want to export displacement maps, normal maps or texture maps created from polypaint, then knowing how much detail they can hold is very useful. The plugin works by creating a texture map. The black areas of the map will be the part unused by the UVs, so by using the map to mask a plane, the amount of the map that is used can be calculated. This can then be used to estimate the maximum number of polygons a map of a given size can support. Download the code for Max Detail here. Focal Shift & Z Intensity adjustment This plugin reproduces some of the same functionality that is in the <<Brush and Brush>> buttons located in the ZPlugin > Misc Utilities sub-palette. Those buttons allow the Draw Size to be adjusted by the [ and ] hotkeys and the increment can be set in the slider. This example will add buttons for adjusting Focal Shift and Z Intensity, along with sliders for setting the increment for each. The code demonstrates the use of [VarSave] for saving and loading a variable value from a file. This means that the increment slider value is kept between ZBrush sessions. Download the code for Focal Shift & Z Intensity adjustment here.
http://docs.pixologic.com/user-guide/customizing-zbrush/zscripting/sample-zscripts/
2019-10-14T00:02:56
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
Yinghui Guan 16 September 2019 Rencontres du Vietnam - Behind and Beyond the Standard Model at the LHC, Future Colliders and Elsewhere Abstract: The Belle II experiment at the SuperKEKB energy-asymmetric $e^+ e^-$ collider is a substantial upgrade of the B-factory facility at the Japanese KEK laboratory. The design luminosity of the machine is $8 \times 10^{35}$ cm$^{-2}$ s$^{-1}$ and the Belle II experiment aims to record 50 ab$^{-1}$ of data, a factor of 50 more than Belle. Regular operations with the full Belle II detector started successfully in 2019. We will review the prospects of the flavor physics program of the Belle II experiment. Keyword(s): flavor physics ; physics prospects ; Belle II Note: 25 + 5 min The record appears in these collections: Talks > Talk Drafts (Internal) Public Talks
https://docs.belle2.org/record/1700?ln=en
2019-10-13T22:56:02
CC-MAIN-2019-43
1570986648343.8
[]
docs.belle2.org
Troubleshooting - My old image is displayed for a second - My video autoplays but sound is not working - How to get rid of the black bands of my video? - How to avoid a video showing black bands - The video takes a few seconds to load - Is it possible to make the video partially transparent? - Does the Turbo theme performance mode affect the app? - Is the app script file optimised? - What is the Shopify storefront password? - How can I create a staff account for you? - Will I be double-charged after reinstalling? - Will Video Background affect my SEO negatively? - I've been charged before the trial expiration - How can you access the app preferences without my login details? - I think I've been charged during the trial period - How to create a preview link when the store is not publicly available - Does Video Background modify my theme templates? - My app preferences are not updated
https://docs.codeblackbelt.com/category/456-troubleshooting
2019-10-13T22:34:01
CC-MAIN-2019-43
1570986648343.8
[]
docs.codeblackbelt.com
Use indexer discovery to connect forwarders to peer nodes Indexer discovery streamlines the process of connecting forwarders to peer nodes in indexer clusters. It simplifies the set-up and maintenance of an indexer cluster. See Advantages of the indexer discovery method. Indexer discovery is available only for forwarding to indexer clusters. Each forwarder queries the master node for a list of all peer nodes in the cluster. It then uses load balancing to forward data to the set of peer nodes. In the case of a multisite cluster, a forwarder can query the master for a list of all peers on a single site. How indexer discovery works Briefly, the process works like this: 1. The peer nodes provide the master with information on their receiving ports. 2. The forwarders poll the master at regular intervals for the list of available peer nodes. You can adjust this interval. See Adjust the frequency of polling. 3. The master transmits the peer nodes' URIs and receiving ports to the forwarders. 4. The forwarders send data to the set of nodes provided by the master. In this way, the forwarders stay current with the state of the cluster, learning of any peers that have joined or left the cluster and updating their set of receiving peers accordingly. In the case of a multisite cluster, each forwarder can identify itself as a member of a site. In that case, the master transmits a list of all peer nodes for that site only, and the forwarder limits itself to load balancing across that site. See Use indexer discovery in a multisite cluster. In addition, the forwarders can use weighted load balancing to adjust the amount of data they send to each peer based on that peer's relative disk capacity. See Use weighted load balancing. Note: If the master goes down, the forwarders will rely on their most recent list of available peer nodes. However, the list does not persist through a forwarder restart. Therefore, if a forwarder restarts while the master is down, it will not have a list of peer nodes and will not be able to forward data, resulting in potential data loss. Similarly, if a forwarder starts up for the first time, it must wait for the master to return before it can get a list of peers. Configure indexer discovery These are the main steps for setting up connections between forwarders and peer nodes, using indexer discovery: 1. Configure the peer nodes to receive data from forwarders. 2. Configure the master node to enable indexer discovery. 3. Configure the forwarders. After you set up the connection, you must configure the data inputs on the forwarders. See Configure the data inputs to each forwarder. 1. Configure the peer nodes to receive data from forwarders In order for a peer to receive data from forwarders, you must configure the peer's receiving port. One way to specify the receiving port is to edit the peer's inputs.conf file. For example, this setting in inputs.conf sets the receiving port to 9997: [splunktcp://9997] disabled = 0 Restart the peer node after making the change. See Enable a receiver in the Forwarding Data manual. Caution: When using indexer discovery, each peer node can have only a single configured receiving port. The port can be configured for either splunktcp or splunktcp-ssl, but not for both. You must use the same method for all peer nodes in the cluster: splunktcp or splunktcp-ssl. See Update common peer configurations and apps. When forwarding to a multisite cluster, you can configure the forwarder to send data only to peers in a specified site.
See Use indexer discovery in a multisite cluster. 2. Configure the master node to enable indexer discovery In server.conf on the master, add this stanza: [indexer_discovery] pass4SymmKey = <string> polling_rate = <integer> indexerWeightByDiskCapacity = <bool> Note the following: - The pass4SymmKey attribute specifies the security key used with communication between the cluster master and the forwarders. Its value must be the same for all forwarders and the master node. The pass4SymmKey attribute used for indexer_discovery should have a different value from the pass4SymmKey attribute used for communication between the master and the cluster nodes, which is set in the [clustering] stanza, as described in Configure the security key. - The polling_rate attribute (optional) provides a means to adjust the rate at which the forwarders poll the master for the latest list of peer nodes. Its value must be an integer between 1 and 10. The default is 10. See Adjust the frequency of polling. - The indexerWeightByDiskCapacity attribute (optional) determines whether indexer discovery uses weighted load balancing. The default is false. See Use weighted load balancing. 3. Configure the forwarders a. Configure the forwarders to use indexer discovery On each forwarder, add these settings to the outputs.conf file: [indexer_discovery:<name>] pass4SymmKey = <string> master_uri = <uri> [tcpout:<target_group>] indexerDiscovery = <name> [tcpout] defaultGroup = <target_group> Note the following: - In the [indexer_discovery:<name>] stanza, the <name> references the <name> set in the indexerDiscovery attribute in the [tcpout:<target_group>] stanza. - The pass4SymmKey attribute specifies the security key used with communication between the master and the forwarders. Its value must be the same for all forwarders and the master node. You must explicitly set this value for each forwarder. - The <master_uri> is the URI and management port for the master node. For example: "". - In the [tcpout:<target_group>] stanza, set the indexerDiscovery attribute, instead of the server attribute that you would use to specify the receiving peer nodes if you were not enabling indexer discovery. With indexer discovery, the forwarders get their list of receiving peer nodes from the master, not from the server attribute. If both attributes are set, indexerDiscovery takes precedence. b. Enable indexer acknowledgment for each forwarder Note: This step is required to ensure end-to-end data fidelity. If that is not a requirement for your deployment, you can skip this step. To ensure that the cluster receives and indexes all incoming data, you must turn on indexer acknowledgment for each forwarder. To configure indexer acknowledgment, set the useACK attribute in each forwarder's outputs.conf, in the same stanza where you set the indexerDiscovery attribute: [tcpout:<target_group>] indexerDiscovery = <name> useACK=true For detailed information on configuring indexer acknowledgment, read Protect against loss of in-flight data in the Forwarding Data manual. Example In this example: - The master node enables indexer discovery. - The master and forwarders share a security key. - Forwarders will send data to peer nodes weighted by the total disk capacity of the peer nodes' disks. - The forwarders use indexer acknowledgment to ensure end-to-end fidelity of data. Use indexer discovery in a multisite cluster In multisite clustering, the cluster is partitioned into sites, typically based on the location of the cluster nodes. See Multisite indexer clusters.
When using indexer discovery with multisite clustering, you can configure each forwarder to be site-aware, so that it forwards data to peer nodes only on a single specified site. When you use indexer discovery with multisite clustering, you must assign a site-id to all forwarders, whether or not you want the forwarders to be site-aware: - If you want a forwarder to be site-aware, you assign it a site-id for a site in the cluster, such as "site1," "site2," and so on. - If you do not want a forwarder to be site-aware, you assign it the special site-id of "site0". When a forwarder is assigned "site0", it will forward to peers across all sites in the cluster. Assign a site-id to each forwarder To assign a site-id, add this stanza to the forwarder's server.conf file: [general] site = <site-id> Note the following: - You must assign a <site-id> to each forwarder sending data to a multisite cluster. This must either be a valid site in the cluster or the special value "site0". - If you want the forwarder to send data only to peers at a specific site, assign the id for that site, such as "site1." - If you want the forwarder to send data to all peers, across all sites, assign a value of "site0". - If you do not assign any id, the forwarder will not send data to any peer nodes. - See also Site values. Configure the forwarder site failover capability If you assign a forwarder to a specific site and that site goes down, the forwarder, by default, will not fail over to another site. Instead, it will stop forwarding data if there are no peers available on its assigned site. To avoid this issue, you must configure the forwarder site failover capability. To configure the forwarder site failover capability, set the forwarder_site_failover attribute in the master node's server.conf file. For example: [clustering] forwarder_site_failover = site1:site2, site2:site3 This example configures failover sites for site1 and site2. If site1 fails, all forwarders configured to send data to peers on site1 will instead send data to peers on site2. Similarly, if site2 fails, all forwarders explicitly configured to send data to peers on site2 will instead send data to peers on site3. Note: The failover capability does not relay from site to site. In other words, in the previous example, if a forwarder is set to site1 and site1 goes down, the forwarder will then start forwarding to peers on site2. However, if site2 subsequently goes down, the site1 forwarder will not then fail over to site3. Only forwarders explicitly set to site2 will fail over to site3. Each forwarder can have only a single failover site. The forwarders revert to their assigned site as soon as any peer on that site returns to the cluster. For example, assume that the master includes this configuration: [clustering] forwarder_site_failover = site1:site2 When site1 goes down, such that there are no peers running on site1, the forwarders assigned to site1 start sending data to peers on site2 instead. This failover condition continues until a site1 peer returns to the cluster. At that point, the forwarders assigned to site1 start forwarding to that peer. They no longer forward to peers on site2. Use weighted load balancing When you enable indexer discovery, the forwarders always stream the incoming data across the set of peer nodes, using load balancing to switch the data stream from node to node. This operates in a similar way to how forwarders without indexer discovery use load balancing, but with some key differences.
In particular, you can enable weighted load balancing. In weighted load balancing, the forwarders take each peer's disk capacity into account when they load balance the data. For example, a peer with a 400GB disk receives approximately twice the data of a peer with a 200GB disk. Important: The disk capacity refers to the total amount of local disk space on the peer, not the amount of free space. How weighted load balancing works Weighted load balancing behaves similarly to normal forwarder load balancing. The autoLBFrequency attribute in the forwarder's outputs.conf file still determines how often the data stream switches to a different indexer. However, when the forwarder selects the next indexer, it does so based on the relative disk capacities. The selection itself is random but weighted towards indexers with larger disk capacities. In other words, the forwarder uses weighted picking. So, if the forwarder has an autoLBFrequency set to 60, then every sixty seconds, the forwarder switches the data stream to a new indexer. If the load balancing is taking place across two indexers, one with a 500GB disk and the other with a 100GB disk, the indexer with the larger disk is five times as likely to be picked at each switching point. The overall traffic sent to each indexer is based on this ratio: indexer_disk_capacity/total_disk_capacity_of_indexers_combined For a general discussion of load balancing in indexer clusters, see How load balancing works. Enable weighted load balancing The indexerWeightByDiskCapacity attribute in the master node's server.conf file controls weighted load balancing: [indexer_discovery] indexerWeightByDiskCapacity = <bool> Note the following: - The indexerWeightByDiskCapacity attribute is set to false by default. To enable weighted load balancing, you must set it to true. Change the advertised disk capacity for an indexer In some cases, you might want weighted load balancing to treat an indexer as though it has a lower disk capacity than it actually has. You can use the advertised_disk_capacity attribute to accomplish this. For example, if you set that attribute to 50 (signifying 50%) on an indexer with a 500GB disk, weighted load balancing will proceed as though the actual disk capacity was 250GB. You set the advertised_disk_capacity attribute in the indexer's server.conf file: [clustering] advertised_disk_capacity = <integer> Note the following: - The advertised_disk_capacity attribute indicates the percentage that will be applied to the indexer's actual disk capacity before it sends the capacity to the master. For example, if set to 50 on an indexer with a 500GB disk, the indexer tells the master that the disk capacity is 250GB. - The value can vary from 10 to 100. - The default is 100. Adjust the frequency of polling Forwarders poll the master at regular intervals to receive the most recent list of peers. In this way, they become aware of any changes to the set of available peers and can modify their forwarding accordingly. You can adjust the rate of polling. The frequency of polling is based on the number of forwarders and the value of the polling_rate attribute, configured in the master's server.conf file.
The polling interval for each forwarder follows this formula: (number_of_forwarders/polling_rate + 30 seconds) * 1000 = polling interval, in milliseconds Here are some examples: # 100 forwarders, with the default polling_rate of 10 (100/10 + 30) * 1000 = 40,000 ms., or 40 seconds # 10,000 forwarders, with the default polling_rate of 10 (10000/10 + 30) * 1000 = 1,030,000 ms., or 1030 seconds, or about 17 minutes # 10,000 forwarders, with the minimum polling_rate of 1 (10000/1 + 30) * 1000 = 10,030,000 ms., or 10,030 seconds, or a bit under three hours To configure polling_rate, add the attribute to the [indexer_discovery] stanza in server.conf on the master: [indexer_discovery] polling_rate = <integer> Note the following: - The polling_rate attribute must be an integer between 1 and 10. - The default is 10. Configure indexer discovery with SSL You can configure indexer discovery with SSL. The process is nearly the same as configuring without SSL, with just a few additions and changes: 1. Configure the peer nodes to receive data from forwarders over SSL. 2. Configure the master node to enable indexer discovery. 3. Configure the forwarders for SSL. The steps below provide basic configuration information only, focusing on the differences when configuring for SSL. For full details on indexer discovery configuration, see Configure indexer discovery. 1. Configure the peer nodes to receive data from forwarders over SSL Edit each peer's inputs.conf file to specify the receiving port and to configure the necessary SSL settings: [splunktcp-ssl://9997] disabled = 0 [SSL] serverCert = <path to server certificate> sslPassword = <certificate password> # Set rootCA in inputs.conf only if sslRootCAPath is not set in the peer's server.conf rootCA = <path to certificate authority list> Note: When using indexer discovery, each peer node can have only a single receiving port. For SSL, you must configure a port for splunktcp-ssl only. Do not configure a splunktcp stanza. 2. Configure the master node to enable indexer discovery In server.conf on the master, add this stanza: [indexer_discovery] pass4SymmKey = <string> polling_rate = <integer> indexerWeightByDiskCapacity = <bool> This is the same as for configuring a non-SSL set-up. 3. Configure the forwarders for SSL On each forwarder, add these settings to the outputs.conf file: [indexer_discovery:<name>] pass4SymmKey = <string> master_uri = <uri> [tcpout:<target_group>] indexerDiscovery = <name> useACK = true clientCert = <path to client certificate> sslPassword = <CAcert password> # Set sslRootCAPath in outputs.conf only if sslRootCAPath is not set in the forwarder's server.conf sslRootCAPath = <path to root certificate authority file> [tcpout] defaultGroup = <target_group>
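A small Python sketch of the polling-interval formula above, reproducing the three worked examples; the helper function name is ours for illustration and is not part of Splunk.

# Illustrative only: compute the indexer-discovery polling interval per the formula above.
def polling_interval_ms(number_of_forwarders: int, polling_rate: int = 10) -> int:
    """(number_of_forwarders / polling_rate + 30 seconds) * 1000, in milliseconds."""
    return int((number_of_forwarders / polling_rate + 30) * 1000)

print(polling_interval_ms(100, 10))     # 40000 ms, or 40 seconds
print(polling_interval_ms(10000, 10))   # 1030000 ms, or about 17 minutes
print(polling_interval_ms(10000, 1))    # 10030000 ms, or a bit under three hours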
https://docs.splunk.com/Documentation/Splunk/7.2.1/Indexer/indexerdiscovery
2019-10-13T23:34:50
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Request Handling (Middlewares) TYPO3 CMS has implemented PSR-15 for handling incoming HTTP requests. The implementation within TYPO3 is often called “Middlewares”, as PSR-15 consists of two interfaces where one is called Middleware. Basic concept The most important information is available in the PSR-15 specification, where the standard itself is explained. The idea is to use PSR-7 Request and Response as a base, and wrap the execution with middlewares which implement PSR-15. PSR-15 will receive the incoming request and return the created response. Within PSR-15 multiple request handlers and middlewares can be executed. Each of them can adjust the request and response. TYPO3 implementation TYPO3 has implemented the PSR-15 approach in the following way: Figure 1-1: Application flow - TYPO3 will create a PSR-7 request. - TYPO3 will collect and sort all configured PSR-15 middlewares. - TYPO3 will convert all middlewares to PSR-15 request handlers. - TYPO3 will call the first middleware with the request and the next middleware. - Each middleware is processed, see request-handling-middleware. - In the end each middleware has to return a PSR-7 response. - This response is passed back to the execution flow. Middlewares Each middleware has to implement the PSR-15 MiddlewareInterface: interface MiddlewareInterface { public function process( ServerRequestInterface $request, RequestHandlerInterface $handler ): ResponseInterface; } By doing so, the middleware can do one or multiple of the following: - Adjust the incoming request, e.g. add further information. - Create and return a PSR-7 response. - Call the next request handler (which again can be a middleware). - Adjust the response received from the next request handler. Middleware examples The following list shows typical use cases for middlewares. Returning a custom response This middleware will check whether TYPO3 is in maintenance mode and will return an unavailable response in that case. Otherwise the next middleware will be called, and its response is returned instead. public function process( ServerRequestInterface $request, RequestHandlerInterface $handler ): ResponseInterface { if (/* if logic */) { return GeneralUtility::makeInstance(ErrorController::class) ->unavailableAction( $request, 'This page is temporarily unavailable.' ); } return $handler->handle($request); } Enriching the request The current request can be extended with further information, e.g. the current resolved site and language could be attached to the request. In order to do so, a new request is built with additional attributes, before calling the next request handler with the enhanced request. public function process( ServerRequestInterface $request, RequestHandlerInterface $handler ): ResponseInterface { $routeResult = $this->matcher->matchRequest($request); $request = $request->withAttribute('site', $routeResult->getSite()); $request = $request->withAttribute('language', $routeResult->getLanguage()); return $handler->handle($request); } Enriching the response This middleware will check the length of generated output, and add a header with this information to the response. In order to do so, the next request handler is called. It will return the generated response, which can be enriched before it gets returned. public function process( ServerRequestInterface $request, RequestHandlerInterface $handler ): ResponseInterface { $response = $handler->handle($request); if (/* if logic */) { $response = $response->withHeader( 'Content-Length', (string)$response->getBody()->getSize() ); } return $response; } Configuring middlewares In order to implement a custom middleware, this middleware has to be configured.
TYPO3 already provides some middlewares out of the box. Besides adding your own middlewares, it’s also possible to remove existing middlewares from the configuration. The configuration is provided within Configuration/RequestMiddlewares.php of an extension: return [ 'frontend' => [ 'middleware-identifier' => [ 'target' => \Vendor\ExtName\Middleware\ConcreteClass::class, 'before' => [ 'another-middleware-identifier', ], 'after' => [ 'yet-another-middleware-identifier', ], ], ], 'backend' => [ 'middleware-identifier' => [ 'target' => \Vendor\ExtName\Middleware\AnotherConcreteClass::class, 'before' => [ 'another-middleware-identifier', ], 'after' => [ 'yet-another-middleware-identifier', ], ], ], ]; TYPO3 has multiple stacks, and a middleware might only be necessary in one of them. Therefore the first level of the configuration defines the context. Within each context the middleware is registered as a new subsection with a unique identifier as key. The default stacks are: frontend and backend. Each middleware consists of the following options: - target PHP string FQCN (=Fully Qualified Class Name) to use as middleware. - before PHP Array List of middleware identifiers. The middleware itself is executed before any other middleware within this array. - after PHP Array List of middleware identifiers. The middleware itself is executed after any other middleware within this array. - disabled PHP boolean Allows disabling specific middlewares.
https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/ApiOverview/RequestHandling/Index.html
2019-10-14T00:06:57
CC-MAIN-2019-43
1570986648343.8
[]
docs.typo3.org
Queue Manager API This page documents all valid options for the YAML file inputs to the config manager. This first section outlines each of the headers (top-level objects) and gives a description of each one. The final file will look like the following: common: option_1: value_for1 another_opt: 42 server: option_for_server: "some string" This is the complete set of options, auto-generated from the parser itself, so it should be accurate for the given release. If you are using a developmental build or want to see the schema yourself, you can run the qcfractal-manager --schema command and it will display the whole schema for the YAML input. Each section below is summarized the same way, showing all the options for that YAML header in the form of the pydantic API that the YAML is fed into; the options match one-to-one.
https://qcfractal.readthedocs.io/en/stable/managers_config_api.html
2019-10-13T22:49:43
CC-MAIN-2019-43
1570986648343.8
[]
qcfractal.readthedocs.io
How to enable or disable HTTPS

This option allows you to enable or disable HTTPS at any given time. To do this, watch the video below, or follow the steps below:

- Click on the picture at the top right side of the page
- Click on Settings
- Click on "Site Settings"
- Click on "System Controls"
- Click Enable next to "HTTPS"
- Click Save Settings to save your settings

Thanks for reading.
http://docs.crea8social.com/docs/site-management/how-to-enable-disable-https/
2019-10-13T23:39:41
CC-MAIN-2019-43
1570986648343.8
[array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-17.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-27-2.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-28-1.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-29-1.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-30-1.png', None], dtype=object) ]
docs.crea8social.com
File:export daz3d step5.png
From iPi Docs

Original file: 1,024 × 766 pixels, file size: 277 KB, MIME type: image/png.

File usage: 4 pages link to this file.
http://docs.ipisoft.com/index.php?title=File:export_daz3d_step5.png&oldid=746
2019-10-13T22:31:08
CC-MAIN-2019-43
1570986648343.8
[array(['/images/thumb/8/89/export_daz3d_step5.png/800px-export_daz3d_step5.png', 'File:export daz3d step5.png'], dtype=object) ]
docs.ipisoft.com
Performs a batch update of the IModelLocalizationItem.Value property values for the LocalizationItem child nodes of a particular LocalizationGroup node. Namespace: DevExpress.ExpressApp.Utils Assembly: DevExpress.ExpressApp.v19.1.dll public static void SetLocalizedText( IModelLocalizationGroup node, IList<string> itemNames, IList<string> itemValues ) Public Shared Sub SetLocalizedText( node As IModelLocalizationGroup, itemNames As IList(Of String), itemValues As IList(Of String) ) If the LocalizationItem node specified by an item in the itemNames list does not exist, it is created. The Application Model has the Localization node, which allows localization of various constants. The Localization node is used to localize custom strings used in an XAF application. The node contains LocalizationGroup child nodes. Each LocalizationGroup child node contains a set of LocalizationItem child nodes. The SetLocalizedText method allows you to change values of LocalizationItems belonging to a particular LocalizationGroup. To see an example of localizing custom string constants, refer to the How to: Localize Custom String Constants help topic.
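As a hedged illustration of the signature documented above, the sketch below batch-updates two localization items under a group node. The item names and values are hypothetical, the group node is assumed to be obtained from the Application Model's Localization node elsewhere in your code, and the namespace for IModelLocalizationGroup is an assumption that may differ in your XAF version.

using System.Collections.Generic;
using DevExpress.ExpressApp.Model;   // assumed namespace for IModelLocalizationGroup
using DevExpress.ExpressApp.Utils;

public static class LocalizationUpdater
{
    // Batch-updates (or creates) LocalizationItem values under the given group node.
    public static void ApplyCustomStrings(IModelLocalizationGroup group)
    {
        IList<string> names  = new List<string> { "WelcomeMessage", "GoodbyeMessage" };
        IList<string> values = new List<string> { "Welcome!",       "Goodbye!" };

        // Items that do not exist yet are created, as described above.
        CaptionHelper.SetLocalizedText(group, names, values);
    }
}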
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Utils.CaptionHelper.SetLocalizedText(IModelLocalizationGroup--IList-String---IList-String-)
2019-10-13T23:39:35
CC-MAIN-2019-43
1570986648343.8
[]
docs.devexpress.com
My last comment (obviously) applies to the outputcsv documentation, too. Whatever is fixed here should be copied and pasted there, also. The handling of `multi-valued fields` is undocumented and is very important; outputcsv/outputlookup both call 'nomv' on all fields before writing out the rows (merging all multi-valued fields into single space-delimited values). Notice the difference (the change to field 'children') between the results of these 2 searches (which most would expect to be the same):

|noop|stats count AS name|eval name="Gregg"|eval spouse="Cindy"|eval children="Lauren Megan Noah"|makemv children |streamstats count AS serial|eval mv_count=mvcount(children)|table serial name spouse children mv_count |outputcsv eraseme.csv

|inputcsv eraseme.csv|streamstats count AS serial|eval mv_count=mvcount(children)|table serial name spouse children mv_count

It would be nice to have an option to these commands to choose between 'nomv' and 'mvexpand'.

Richgalloway - Thanks for noticing the issue with Example 5 and your question about the key_field argument. I fixed the example. For your question about the description of key_field, yes, this only happens with concurrent queries, one with outputlookup and one with inputlookup. It could happen that the inputlookup would run while the outputlookup was still updating some of the records. I'll update the description with this information.

The description of the key_field argument says "An outputlookup search using the key_field argument might result in a situation where the lookup table or collection is only partially updated. This means that a subsequent lookup or inputlookup search on that collection might return stale data along with new data." When would a partial update occur? Is this only a concern when outputlookup and inputlookup are simultaneous in separate searches? Example 5 does not use the key_field argument as would seem to be required to perform an update. Does key_field have a default value that is not documented?

Woodcock - Thanks for noticing this. I have updated the sentence.

This statement is incorrect (a copy/paste error from "inputlookup", probably, where it is true): "The outputlookup command is a generating command and should be the first command in the search. Generating commands use a leading pipe character."

Woodcock - You are correct, we don't preserve the internal multivalue encoding scheme for outputcsv or outputlookup. The mv fields are flattened. I will forward your request to choose between 'nomv' and 'mvexpand' to our development team.
https://docs.splunk.com/Documentation/Splunk/6.4.5/SearchReference/Outputlookup
2019-10-13T23:47:48
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
ReplayLast The Replay Last button re-traces the last brush stroke (from mouse/pen click to release), regardless whether it was created using the same tool or settings. ReplayLastRel The Replay Last Relative button will replay the last brush stroke at the new cursor position, as long as the mesh has not been rotated. The hotkey Shift+1 must be used as the start is dependent on the cursor position. (The button is provided so a new hotkey can be assigned if desired.) Directional Directional Brush Stroke specifies that continuous brush strokes are only applied while traveling away from the point of first click. Spacing The Spacing slider determines how many instances of the current tool are added by certain brush strokes. Placement Placement Variance. Used by the Spray and Colorized Spray strokes, the Placement slider determines the maximum distance each random dot strays from the center of the cursor drag. Dot placement is also governed by the Draw Size. Changing this value also affects the relative size of each dot drawn. Scale Scale Variance. If the selected stroke is a Spray or Colorized Spray stroke, the Scale slider determines the maximum variance in dot size. If this value is 0, all dots are drawn at the same size. If the selected stroke is a DrawRectangle stroke: normally when drawing 3D objects, a click+drag outward defines a certain overall size, then a drag inward again shrinks only the X and Y axes. Setting this slider to 0 causes all three axes to grow and shrink equally while drawing. Color Color Intensity Variance. Used by the Spray and Colorized Spray strokes, the Color slider determines the maximum variance in color (for Colorized Spray stroke) or color intensity (Spray stroke). If this value is 0, all dots are drawn with the same color. Flow Flow Variance. Used by the Spray and Colorized Spray strokes, the Flow slider determines the density of dots drawn. Smaller values result in fewer dots, larger values result in more dots. M Repeat Main Repeat Count. For strokes which apply multiple instances of the tool in a pattern (such as Radial and Grid strokes), the Main Repeat Count slider determines the number of instances applied. For the Radial stroke, this slider determines the number of instances placed around the circle; for the Grid stroke, this slider determines the number of columns in the grid. S Repeat Secondary Repeat Count. For strokes which apply multiple instances of the tool in a pattern (such as the Grid stroke), the Secondary Repeat Count slider provides the secondary count of instances applied, if one is needed. For the Grid stroke, this slider determines the number of rows in the grid. Square Keep Square. Turn on the Square button to keep the texture/alpha/selection/or masking a perfect ratio as it is drawn onto the surface. Center Drag From Center. Turn on the Center button to draw out the texture/alpha/selection/or masking starting at the center of the brush stoke. Roll Roll mode. Press Roll to tile your currently selected alpha in your brush stroke. This is very useful if your alpha is already tileable. Roll Dist The Roll Distance slider will adjust the roll of an alpha or texture to stretch out to a larger distance. If the slider is set higher then ZBrush will apply the texture/alpha to cover a further distance along the surface.
http://docs.pixologic.com/reference-guide/stroke/modifiers/
2019-10-13T23:38:24
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
The decorators in django.views.decorators.http can be used to restrict access to views based on the request method.

require_http_methods(request_method_list)¶
Decorator to require that a view only accepts particular request methods. Note that request methods should be in uppercase.

require_GET()¶
Decorator to require that a view only accepts the GET method.

require_POST()¶
Decorator to require that a view only accepts the POST method.

condition(etag_func=None, last_modified_func=None)¶
etag(etag_func)¶
last_modified(last_modified_func)¶
These decorators can be used to generate ETag and Last-Modified headers; see conditional view processing.

The decorators in django.views.decorators.gzip control content compression on a per-view basis.

gzip_page()¶
This decorator compresses content if the browser allows gzip compression.

vary_on_headers(*headers)¶
The Vary header defines which request headers a cache mechanism should take into account when building its cache key. See using vary headers.

The decorators in django.views.decorators.cache control server and client-side caching.

cache_control(**kwargs)¶
This decorator patches the response's Cache-Control header by adding all of the keyword arguments to it. See patch_cache_control() for the details of the transformation.

never_cache(view_func)¶
This decorator adds a Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private header to a response to indicate that a page should never be cached. The private directive was added to this header in a later release.
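As a quick sketch of how several of these decorators combine on ordinary function-based views (view names and cache lifetimes are illustrative, not part of Django):

from django.http import HttpResponse
from django.views.decorators.cache import cache_control, never_cache
from django.views.decorators.http import require_GET, require_http_methods
from django.views.decorators.vary import vary_on_headers


@require_http_methods(["GET", "POST"])  # method names must be uppercase
def my_view(request):
    return HttpResponse("GET or POST only")


@require_GET
@cache_control(max_age=3600, public=True)  # adds Cache-Control: max-age=3600, public
@vary_on_headers("User-Agent")             # cache key varies per User-Agent
def cached_view(request):
    return HttpResponse("cacheable for an hour")


@never_cache
def volatile_view(request):
    return HttpResponse("never cached")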
https://django.readthedocs.io/en/latest/topics/http/decorators.html
2019-10-13T23:55:11
CC-MAIN-2019-43
1570986648343.8
[]
django.readthedocs.io
Umberto Tamponi
19 September 2019
LC2019

The main operation of SuperKEKB started in March 2019: first results on approx. 10 fb−1 of data are expected by the end of June.

Note: 20 + 5 min

The record appears in these collections: Talks > Talk Drafts (Internal), Public Talks
https://docs.belle2.org/record/1702?ln=en
2019-10-13T22:53:49
CC-MAIN-2019-43
1570986648343.8
[]
docs.belle2.org
Modify the Redis persistence mode

Change the persistence mode to adapt Redis persistence to your needs. There are multiple possibilities to do so. Find below how to enable AOF as an example:

Edit the configuration file installdir/redis/etc/redis.conf and change the appendonly configuration directive from no to yes:

appendonly yes

Then restart the Redis server so that the change takes effect.
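For comparison, the other persistence mode, RDB snapshotting, is controlled by save directives in the same redis.conf file. The thresholds below are the stock Redis defaults; your installation may ship different values:

save 900 1      # snapshot if at least 1 key changed in 900 seconds
save 300 10     # snapshot if at least 10 keys changed in 300 seconds
save 60 10000   # snapshot if at least 10000 keys changed in 60 seconds

# To rely on AOF only, disable RDB snapshots by commenting out the save lines
# or by setting:
# save ""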
https://docs.bitnami.com/installer/apps/pootle/troubleshooting/change-persistence-mode/
2019-10-14T00:03:16
CC-MAIN-2019-43
1570986648343.8
[]
docs.bitnami.com
- Understanding Crawling Performance The crawling performance is defined by the number of items that a given connector processes per hour. Many factors influence the crawling speed, a small number of processed items is not necessarily a sign of slow crawling. Transferred Data Available Bandwidth (Mostly relevant for Crawling Module and on-premises sources) Depending on your configurations, the network bandwidth available to the server hosting the connectors or targeted system can limit the crawling performance. If the connector is downloading large amounts of data, the bottleneck to performance could be the available bandwidth. The same issue can occur when the crawler is physically far from the server hosting the data it is indexing. On a Windows server, if you want to know how much of the server bandwidth you are using, look at the Network section in Task Manager (Task Manager > Performance tab > Network section). The network bandwidth could be lower when the connector is in New York and the server being indexed is in Los Angeles. Solutions Improve your internet connection. Host the connector closer to the data source or vice versa. Size of Items Some repositories contain huge items that the connector has to download. If the connector retrieves videos, huge PDFs. and other files weighing several MBs, the connector will not crawl items efficiently, because all its time will be spent downloading these files. In such case, you can expect much smaller numbers of processed items per hour. Solutions Evaluate if all items the connector is downloading are desirable and useful in the index. You should be adjusting content types to filter out these items or index them by reference (see Change Indexed Item Types and Indexing by Reference). Videos and images are probably not useful in the index unless you have particular use cases. If you still have available bandwidth, increasing the number of refresh threads will allow more simultaneous downloads (see Number of Refresh Threads). You can monitor the ongoing total size of downloaded items in the Activity panel of the administration console (see Review Events Related to Specific Coveo Cloud Administration Console Resources). Server Performance Server Responsiveness and Latency Connectors are crawling live systems that are installed and maintained by a variety of different companies. Two SharePoint 2019 servers installed at two different locations will most likely not offer the same performance. The following factors can influence server performance: If the server is not responsive, connector calls will take much more time to execute, thus significantly decreasing the crawling speed. Load If a high number of users are using the system while the connector crawls it, the system performance can be impacted. The server can also be under high load when it contains more data than recommended for its infrastructure. (Only for on-premises servers) Infrastructure Most systems are recommended to be installed on a dedicated and complete Windows servers or virtual machines. Solutions - Validate your server infrastructure. (for Coveo On-Premises Crawling Module servers only) Ensure your server meets the hardware and software requirements (see Requirements). - Create source schedules to crawl outside of peak hours (see Edit a Source Schedule). API Performance The connector performance also heavily relies on the APIs performance of the systems it crawls, regardless of your server configuration. 
Certain APIs have limited capacity, and that limitation impacts the number of items per hour the related connectors can process. Solution There is no direct solution; the only thing you can do is to decrease the other factors affecting the crawling performance to lessen the impact of the system APIs. Throttling Cloud services and even installed servers have implemented throttling to protect their APIs. The effect of throttling on the connectors is drastic and there are no solutions to bypass it. Throttling means that some requests of the connectors are refused or slowed down. Exchange Online has strict throttling policies that are not configurable, and this is one of the reasons why Coveo does not recommend crawling emails. The amount of email content is huge, and keeps increasing, but the crawling performance is poor due to Exchange throttling policies. Solution Some systems have throttling policies that you can modify to allow Coveo connectors to bypass them. Contact Coveo Support for more information. The Web connector self-throttles to one request per second to follow internet politeness norms. If you want to crawl your own web site and do not mind increasing the load, you can unlock the crawling speed (see the Crawling Limit Rate parameter). Frequency of Errors Errors are also a source of slowness. When the connector hits an error, the connector identifies whether it can retry the failed operation. If the connector can retry, it usually waits a few seconds, and then tries the operation again to get all the content requested by the source configuration. However, because of all the factors mentioned in the Server Performance section as well as limiting system APIs, a high number of errors can lead to a lot of “waiting for retry” time, which ultimately impacts the crawling performance. Solution Depending on the type of errors: When the errors persist or are internal to the Coveo Cloud platform: contact Coveo Support. Otherwise, follow the instructions provided with error codes to fix errors in source configurations or server-side. Source Configuration The configuration of the source can also have a huge impact on the total crawling time as well as the crawling speed. Number of Refresh Threads Most connectors (sources) have a default value of 2 refresh threads. This value prevents the connector from being throttled. However, if you are crawling a system that can take more load (e.g., powerful infrastructure or light load), you can consider increasing this parameter value in order to increase performance. Add the NumberOfRefreshThreads hidden parameter in the source JSON configuration, and incrementally increase its value up to 8 threads, which is the value usually providing the most significant performance benefits (see Source JSON Modification Examples). If you increase the load on the server, this could lead to throttling and have an adverse effect. Thus, monitor your change to ensure its impact is advised. Number of Sources Similar to the number of threads, the number of concurrent sources targeting a system is also a good way to increase performance. If the content you want to index can be split into multiple sources, having multiple sources crawling at the same time will multiply total performance. However, it will also multiply the server load, throttling, etc. There are also added benefits to configure source schedules with more granularity. 
A given portion of your content may require a higher refresh frequency, while other content almost never changes and needs to be refreshed less often.

Enabled Options

Some connectors have content options that can impact performance. If you do not need certain content types, do not enable the related options. Most source configurations allow you to Retrieve Comments (and other similar options); enabling these features can have a cost in terms of performance, because the connector may have to perform additional API calls to retrieve this additional content.

Differentiating Crawling Operations

Once the source is created, you can update the source with different operations (see Refresh VS Rescan VS Rebuild for more details).

Rebuild and Rescan

Rebuild and rescan are operations that attempt to retrieve all the content requested by the source configuration. The difference is that a rebuild takes significantly more time to execute. Avoid rebuilding a source without a good reason, since a rebuild takes longer than a rescan, which takes longer than a refresh.

Refresh

Refresh is the fastest operation, meant to retrieve only the delta of item changes since the last source operation. It is recommended to schedule this operation at the highest frequency possible and supported by the crawled systems. For sources that do not fully support refresh operations, a weekly scheduled rescan is recommended.

Prepare Your Expectations

If you are planning to configure a big source, it is important to prepare your expectations. Rule of thumb: the higher the number of items to index, the longer the initial build will take.

Still Too Slow?

If you feel the performance you are getting is still too slow, you can consider opening a maintenance case so the Coveo Cloud team can investigate your source configurations and Coveo On-Premises Crawling Module servers. Make sure to include the actions that you took to fine-tune the crawling performance, since the first steps of investigation are listed in this article.
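Returning to the NumberOfRefreshThreads tuning mentioned in the Source Configuration section above, the hidden parameter is added under the source's parameters object in its JSON configuration. The exact JSON layout depends on your source type and platform version, so treat the fragment below as an illustrative sketch rather than the authoritative schema:

{
  "parameters": {
    "NumberOfRefreshThreads": {
      "sensitive": false,
      "value": "8"
    }
  }
}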
https://docs.coveo.com/en/2078/
2019-10-13T23:45:18
CC-MAIN-2019-43
1570986648343.8
[]
docs.coveo.com
The Alpha palette contains a variety of grayscale images known as Alphas. These images look like nautical depth soundings used to map the ocean floor — nearer portions are lighter, more distant portions are darker. When used with painting tools, Alphas determine the shape of the brush being used. When used with 3D objects, Alphas can be used to sculpt the objects in unique ways, or as displacement maps. You can add Alphas to this palette by importing images from disk files, or by grabbing depth information from the canvas (using the MRGBZGrabber Tool). You can export any Alpha as an image file, in a variety of formats. Unlike standard 8-bit grayscale images which contain 256 gray levels, ZBrush-generated Alphas are 16-bit images which contain over 65,000 gray levels. Alphas can also be converted to Stencils, Textures, or even 3D meshes. They can also be modified using the Alpha Adjust curve. Import The Import button loads an image from a saved file. ZBrush recognizes a number of standard image formats for import and export including .bmp (Windows Bitmap), .psd (Photoshop), .jpg (JPEG), .tif (TIFF). You can select multiple alpha images and load them all at once. If you import color images, they will automatically be converted to grayscale. Export The Export button saves the current Alpha to an image file in a variety of file formats. Alphas created within ZBrush will be 16 bit or 8 bit depending on how they were created. Ep (Export processed Alpha) If pressed, then any alpha that is exported will include the modifications made to it by the AlphaAdjust curve. (This is the same as the alpha that appears in the large thumbnail preview.) If not pressed, then any exported alpha will have its ‘original’ appearance, without modification by AlphaAdjust. Alpha selection slider Use the Alpha selection slider to select any item in this palette by number. R (Restore Configuration) As alphas are selected, they will be added to the “recently used” set of thumbnails that appears in the Alpha palette. In time, this may cause the palette to grow too large. Pressing R resets the recently used section of the palette to display the original number of thumbnails. Current Alpha and recently selected Alphas Alphas can be selected from either the Alpha palette, or the large Alpha thumbnail on the left of the ZBrush canvas. In either case, click on the large thumbnail to bring up the full selection of alphas. Within the palette, you can also click one of the small thumbnails that show recently used alphas, to select it. The inventory of alphas works the same as others in ZBrush, such as those in the Tool and Material palettes. The active alpha is grayed out to indicate that it is already selected. Note: In the Alpha Palette, click one of the small thumbnails and then select an alpha from the resulting popup of the alpha inventory, to have the selected alpha replace the clicked thumbnail, rather than be added to the list of recently used alphas. To see the name, size, and bit depth of an alpha, hover the mouse over its thumbnail. Flip H Flips the alpha left-to-right, making it a mirror-image of itself. Flip V Flips the alpha top-to-bottom, making it a mirror-image of itself. Rotate Rotates the alpha by 90 degrees clockwise. Height and width remain the same, so if the Alpha is not square, it is stretched to fit the current width and height values. Inverse Produces an inverse of the alpha so that white becomes black, darker grays become lighter, and vice-versa. 
Surface The Surface button mode automatically defines the best middle gray value for your alpha. It allows you to add details from the alpha to your sculpt without destroying details already on the surface. Seamless With the Seamless slider ZBrush will transform the selected alpha to a seamless pattern. A high value will make large changes to the alpha to make it seamless while a low value will make minor changes. You may need to increase or decrease this value depending on the complexity of your alpha. Most alphas require a unique setting for best results. Alpha palette sub-palettes Reference Guide > Alpha
http://docs.pixologic.com/reference-guide/alpha/
2019-10-13T23:18:29
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
ArenaParkour Documentation

A Parkour system that allows you to create Parkour arenas where you can specify checkpoints and victory locations. It is configurable so that players can join solo or compete against each other. Prizes can be handed out to the winner, and there are many other configurable options.

== Setup ==
- When making an arena, make sure to have at least one checkpoint and one victory point. You can have as many victory points and checkpoints as you like!
- To make a checkpoint you need to use your WorldEdit wand to select a region, then type the commands below in the Commands section.

== Let everyone play solo ==
Copy this ParkourConfig.yml and paste it into your ArenaParkour/ParkourConfig.yml. Afterwards just do /pk reload.

== Requirements ==
- WorldEdit - just for the selecting of regions
- BattleArena

== Information ==
- Players are teleported to the last checkpoint by doing /pk last or when their health reaches below zero.

== Tutorial ==
YouTube tutorial (thank you IngrownPenguin)

== Developer Options ==
Click here to go to the Developer's Page!

== Return to Index ==
Main Page
https://docs.battleplugins.org/docs/ext/ArenaParkour/
2019-10-13T22:25:26
CC-MAIN-2019-43
1570986648343.8
[]
docs.battleplugins.org
Adding an ad-hoc forum to a page

Ad-hoc forums are useful if you want to enable users to comment on pages or articles, but don't want to create a forum for each page manually. In case you don't need a structured forum, you can also consider using Message boards for this purpose.

You can create Ad-hoc forums by means of the Forum (Single forum - General) web part. If you place the Forum (Single forum - General) web part to a page and set its Forum name property to ad-hoc forum (or ad_hoc_forum if you are using ASPX templates), the system displays the new forum on the website. However, the forum will actually be created only when a visitor adds the first post to the forum. After that, the system will add the forum to the AdHoc forum group available in the Forums application. Ad-hoc forums are uniquely identified by the page.
https://docs.kentico.com/k12/community-features/forums/adding-an-ad-hoc-forum-to-a-page
2019-10-13T23:47:53
CC-MAIN-2019-43
1570986648343.8
[]
docs.kentico.com
Monitor changes to your file system The Splunk Enterprise file system change monitor tracks changes in your file system. The monitor watches a directory you specify and generates an event when that directory undergoes a change. It is completely configurable and can detect when any file on the system is edited, deleted, or added (not just Splunk-specific files). For example, you can tell the file system change monitor to watch /etc/sysconfig/ and alert you any time the system configurations change. To monitor file system changes on Windows, see Monitor file system changes in this manual to learn how with Microsoft native auditing tools. How the file system change monitor works The file system change monitor detects changes using: - modification date/time - group ID - user ID - file mode (read/write attributes, etc.) - optional SHA256 hash of file contents You can configure the following features of the file system change monitor: - whitelist using regular expressions - specify files that will be checked, no matter what - blacklist, Splunk Enterprise By default, the file system change monitor generates audit events whenever the contents of $SPLUNK_HOME/etc/ are changed, deleted, or added to. When you start Splunk Enterprise for the first time, it generates an audit event for each file in the $SPLUNK_HOME/etc/ directory and all subdirectories. Afterward, any change in configuration (regardless of origin) generates an audit event for the affected file. If you have configured signedaudit=true, Splunk Enterprise indexes the file system change into the audit index ( index=_audit). If signedaudit is not turned on, by default, Splunk Enterprise writes the events to the main index unless you specify another index. The file system change monitor does not track the user name of the account executing the change, only that a change has occurred. For user-level monitoring, consider using native operating system audit tools, which have access to this information. Caution: Do not configure the file system change monitor to monitor your root file system. This can be dangerous and time-consuming if directory recursion is enabled. Configure the file system change monitor Configure the file system change monitor in inputs.conf. There is no support for configuring the file system change monitor in Splunk Web. You must restart Splunk Enterprise any time you make changes to the [fschange] stanza. 1. Open inputs.conf. 2. Add [fschange:<directory>] stanzas to specify files or directories that Splunk Enterprise should monitor for changes. 3. Save the inputs.conf file and close it. 4. Restart Splunk Enterprise. File system change monitoring begins immediately. If you want to use this feature with forwarding, follow these guidelines: - To send the events to a remote indexer, use a heavy forwarder. - If you cannot use a heavy forwarder, then follow the configuration instructions at Use with a universal forwarder. To use the file system change monitor to watch any directory, add or edit an [fschange] stanza to inputs.conf in $SPLUNK_HOME/etc/system/local/ or your own custom application directory in $SPLUNK_HOME/etc/apps/. For information on configuration files in general, see About configuration files in the Admin manual. Syntax Here is the syntax for the [fschange] stanza: [fschange:<directory or file to monitor>] <attribute1> = <val1> <attribute2> = <val2> ... Note the following: - Splunk Enterprise monitors all adds/updates/deletes to the directory and its subdirectories. 
- Any change generates an event that Splunk indexes. <directory or file to monitor>defaults to $SPLUNK_HOME/etc/. Attributes All attributes are optional. Here is the list of available attributes: Define a filter To define a filter to use with the filters attribute, add a [filter...] stanza as follows: [filter:blacklist:backups] regex1 = .*bak regex2 = .*bk [filter:whitelist:code] regex1 = .*\.c regex2 = .*\.h [fschange:/etc] filters = backups,code The following list describes how Splunk Enterprise handles fschange whitelist and blacklist logic: - The events run down through the list of filters until they reach their first match. - If the first filter to match an event is a whitelist, then Splunk Enterprise indexes the event. - If the first filter to match an event is a blacklist, the filter prevents the event from getting indexed. - If an event reaches the end of the chain with no matches, then Splunk Enterprise indexes the event. This means that there is an implicit "all pass" filter built in. To default to a situation where Splunk Enterprise does not index events if they don't match a whitelist explicitly, end the chain with a blacklist that matches all remaining events. For example: ... filters = <filter1>, <filter2>, ... terminal-blacklist [filter:blacklist:terminal-blacklist] regex1 = .? If you blacklist a directory including a terminal blacklist at the end of a series of whitelists, then Splunk Enterprise blacklists all its subfolders and files, as they do not pass any whitelist. To accommodate this, whitelist all desired folders and subfolders explicitly ahead of the blacklist items in your filters. Example of explicit whitelisting and terminal blacklisting This configuration monitors files in the specified directory with the extensions .config, .xml, .properties, and .log and ignores all others. In this example, a directory could be blacklisted. If this is the case, Splunk Enterprise blacklists all of its subfolders and files as well. Only files in the specified directory would be monitored. [filter:whitelist:configs] regex1 = .*\.config regex2 = .*\.xml regex3 = .*\.properties regex4 = .*\.log [filter:blacklist:terminal-blacklist] regex1 = .? [fschange:/var/apache] index = sample recurse = true followLinks = false signedaudit = false fullEvent = true sendEventMaxSize = 1048576 delayInMills = 1000 filters = configs,terminal-blacklist Use with a universal forwarder To forward file system change monitor events from a universal forwarder, you must set signedaudit = false and index=_audit. [fschange:<directory or file to monitor>] signedaudit = false index=_audit With this workaround, Splunk Enterprise indexes file system change monitor events into the _audit index with sourcetype set to fs_notification and source set to fschangemonitor, instead of the default value of audittrail for both sourcetype and!
https://docs.splunk.com/Documentation/Splunk/6.4.1/Data/Monitorchangestoyourfilesystem
2019-10-13T23:38:51
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
This documentation does not apply to the most recent version of Splunk. Click here for the latest version. serverclass.conf The following are the spec and example files for serverclass.conf. serverclass.conf.spec # Version 7.2.0 # # This file contains possible attributes and values for defining server # classes to which deployment clients can belong. These attributes and # values specify what content a given server class member will receive from # the deployment server. # # For examples, see serverclass.conf.example. You must reload deployment # server ("splunk reload deploy-server"), or restart splunkd, for changes to # this file to take effect. # # To learn more about configuration files (including precedence) please see # the documentation located at # #*************************************************************************** # Configure the server classes that are used by a deployment server instance. # # Server classes are essentially categories. They use filters to control # what clients they apply to, contain a set of applications, and may define # deployment server behavior for the management of those applications. The # filters can be based on DNS name, IP address, build number of client # machines, platform, and the so-called clientName. If a target machine # matches the filter, then the apps and configuration content that make up # the server class will be deployed to it. # Property Inheritance # # Stanzas in serverclass.conf go from general to more specific, in the # following order: # [global] -> [serverClass:<name>] -> [serverClass:<scname>:app:<appname>] # # Some properties defined at a general level (say [global]) can be # overridden by a more specific stanza as it applies to them. All # overridable properties are marked as such. FIRST LEVEL: global ########### # Global stanza that defines properties for all server classes. [global]). *. * Defaults to $SPLUNK_HOME/etc/deployment-apps targetRepositoryLocation = <path> * The location on the deployment client where to install the apps defined for this Deployment Server. * If this value is unset, or set to empty, the repositoryLocation path is used. * Useful only with complex (for example, tiered) deployment strategies. * Defaults to $SPLUNK_HOME/etc/apps, the live configuration directory for a Splunk instance. tmpFolder = <path> * Working folder used by deployment server. * Defaults to $SPLUNK_HOME/var/run/tmp continueMatching = true | false * Controls how configuration is layered across classes and server-specific settings. * If true, configuration lookups continue matching server classes, beyond the first match. * If false, only the first match will be used. * A serverClass can override this property and stop the matching. * Matching is done in the order in which server classes are defined. * Can be overridden at the serverClass level. * Defaults to true endpoint = <URL template string> * The endpoint from which content can be downloaded by a deployment client. The deployment client knows how to substitute values for variables in the URL. * Any custom URL can also be supplied here, as long as it uses the specified variables. * Need not be specified unless you have a very specific need, for example: To acquire deployment application files from a third-party Web server, for extremely large environments. * Can be overridden at the serverClass level. 
* Defaults to $deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$ filterType = whitelist | blacklist * The whitelist setting indicates a filtering strategy that pulls in a subset: * Items are not considered to match the stanza by default. * Items that match any whitelist entry, and do not match any blacklist entry are considered to match the stanza. * Items that match any blacklist entry are not considered to match the stanza, regardless of whitelist. * The blacklist setting indicates a filtering strategy that rules out a subset: * Items are considered to match the stanza by default. * Items that match any blacklist entry, and do not match any whitelist entry are considered to not match the stanza. * Items that match any whitelist entry are considered to match the stanza. * More briefly: * whitelist: default no-match -> whitelists enable -> blacklists disable * blacklist: default match -> blacklists disable-> whitelists enable * Can be overridden at the serverClass level, and the serverClass:app level. * Defaults to version > 6.4, the instanceId of the client. This is a GUID string, e.g. 'ffe9fe01-a4fb-425e-9f63-56cc274d7f8b'. * All of these can be used with wildcards. * will match simply '.' to mean '\.' * You can specify simply '*' will cause all hosts in splunk.com, except 'printer' and 'scanner', to # match this server class. # Example with filterType=blacklist: # blacklist.0=* # whitelist.0=*.web.splunk.com # whitelist.1=*.linux.splunk.com # This will cause only the 'web' and 'linux' hosts to match the server class. # No other hosts will match. # Deployment client machine types (hardware type of respective host machines) # can also be used to match DCs. # This filter will DS, however, you can determine the value of DC's machine # type with this Splunk CLI command on the DS: # <code>./splunk list deploy-clients</code> # The <code>utsname</code> values in the output are the respective DCs' white/blacklist filters. Only clients which match the white/blacklist AND which match this. restartSplunkWeb = true | false * If true, restarts SplunkWeb on the client when a member app or a directly configured app is updated. * Can be overridden at the serverClass level and the serverClass:app level. * Defaults to false restartSplunkd = true | false * If true, restarts splunkd on the client when a member app or a directly configured app is updated. * Can be overridden at the serverClass level and the serverClass:app level. * Defaults to false. * defaults to will be the same as on the deployment server. * Can be overridden at the serverClass level and the serverClass:app level. * Defaults to enabled., space, underscore, dash, dot, tilde, and the '@' symbol. It is case-sensitive. # NOTE: # The keys listed below are all described in detail in the # [global] section above. They can be used with serverClass stanza to # override the global setting continueMatching = true | false endpoint = <URL template string> stateOnClient = enabled | disabled | noop repositoryLocation = <path> THIRD LEVEL: app ########### [serverClass:<server class name>:app:<app name>] * This stanza maps an application (which must already exist in repositoryLocation) to the specified server class. * server class name -, space, underscore, dash, dot, tilde, = true | false restartIfNeeded = true | false excludeFromUpdate = <path>[,<path>]... serverclass.conf.example # Version 7 (whitelist|blacklist) text file import. 
[serverClass:MyApps] whitelist.from_pathname = etc/system/local/clients.txt # Example 6b # Use (whitelist|blacklist) CSV file import to read all values from the Client # field (ignoring all other fields). [serverClass:MyApps] whitelist.select_field = Client whitelist.from_pathname = etc/system/local/clients.csv # Example 6c # Use (whitelist|blacklist) (whitelist|blacklist)* This documentation applies to the following versions of Splunk® Enterprise: 7.2.0
https://docs.splunk.com/Documentation/Splunk/7.2.0/Admin/Serverclassconf
2019-10-13T23:47:07
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
WSO2 Carbon is based on Java OSGi technology, which allows components to be dynamically installed, started, stopped, updated, and uninstalled while eliminating component version conflicts. In Carbon, this capability translates to a solid core of common middleware components useful across any enterprise project, plus the ability to add components for specific features needed to solve a specific enterprise scenario. The core set of components in WSO2 Carbon provide WSO2 middleware products with enterprise-class management, security, clustering, logging, statistics, tracing, throttling, caching, and other capabilities as well as a management UI framework. Central to these components is WSO2’s solid and high-performance SOA and Web Services engine. Add-in components encapsulate different functionality. A unified graphical management console can deploy, manage, and view services, processes, process instances, and statistics across the whole platform comprising of different products. As each runtime component is added, associated management components are added to the UI. With a clean front-end/back-end separation between the UI and the runtime, all capabilities can be controlled through a remote WSO2 Carbon UI, or through a clean Web Services interface. We use Carbon as the core of our middleware products reflecting familiar middleware product categories. Most users will start with one of these categories with the assurance that components can be added/removed to facilitate changing business requirements.
https://docs.wso2.com/display/Carbon4411/Carbon+Architecture
2019-10-13T22:26:20
CC-MAIN-2019-43
1570986648343.8
[]
docs.wso2.com
Prerequisites

To use Apigee Edge with Cloud Foundry, you'll need the following:

- An Apigee Edge account (Public Cloud or Private Cloud) with an organization where API proxies can be created. To create an Apigee Edge account, see Creating an Apigee Edge account. For more information about Apigee organizations, see Organization structure.
- Version 3.1.1 of the Apigee Edge Service Broker for PCF. You can install the tile using instructions in this documentation.
- The cf command line interface, v6.20.0 or later, which includes the required support for route-services operations. This integration works with v6.20.0.
- A Pivotal Cloud Foundry Elastic Runtime (v2.1 to v2.2) deployment.
- Apigee Edge Microgateway, depending on the service plan you are using.
https://docs.apigee.com/api-platform/integrations/cloud-foundry/getting-started-cloud-foundry-integration.html
2019-10-13T22:38:43
CC-MAIN-2019-43
1570986648343.8
[]
docs.apigee.com
Defining website content structure Kentico offers multiple ways to store and organize your website's content. This page describes each approach in detail and lists their advantages and disadvantages. You can store content: - As pages – using a hierarchical structure visualized a content tree. On Portal Engine sites, the content tree also defines the navigation and site map of the website. On MVC sites, the content tree holds the page data, and the structure and navigation of the website is defined by the code of the MVC live site application.. - We do not recommend loading and displaying data from content tree sections that contain more than 100 000 descendant pages within a single list (when retrieving page data on MVC sites or configuring listing web parts on Portal Engine sites). - - 2018 - March - 1 - Article1 - Article2 - ... - 2 - 3 - ... - 31 - April - 1 - ... - 30 - May If you plan on segmenting an even larger amount of content, you can structure the content tree even further. However, note that the 'Alias path' for each item in the content tree cannot exceed 450 characters in length. See limitations of using pages for storing content for more information. Note: While this helps you structure content so that you do not exceed the recommended 1000 child pages limitation, the total number of pages in the system will increase. Consider the recommended maximum number of pages in the system as well. Optimizing performance for very large Portal Engine sites See Optimizing performance of Portal Engine sites for information on how to resolve issues with website performance. One approach for reducing the amount of pages stored in the content tree is setting up archival for outdated pages. Types of content stored in pages While all items in the content tree of the Pages application are pages, there are several kinds of pages that you can distinguish. The options depend on the development model that you use to build your website. On MVC sites, the following general types of pages are available: - Structured pages – contain structured and strongly typed data stored within page fields (the fields depend on specific page types). The content is edited on the Content tab, and then retrieved and displayed by the MVC live site application. - Page builder pages – use the page builder feature to edit content on the Page tab. The page builder allows even non-technical users to design pages on MVC sites using configurable widgets prepared by the developers. See Page builder development for more information. If you need help when deciding between page builder and structured pages, visit Choosing the format of page content to learn how to choose the more suitable approach. On Portal Engine sites, the following general types of pages are available: - Structured pages – contain structured and strongly typed data stored within page fields (the fields depend on specific page types). The content is edited on the Form tab, and then displayed by pages that behave as menu items. - Menu item pages – display content from other structured pages and may also store unstructured content in editable regions that can be edited on the Page tab. These pages are displayed as navigation menu items by default (this can be also customized). This behavior is controlled by editing individual page types in the Page types application and enabling the Behave as Page (menu item) option. The following table compares both approaches to content storage on Portal Engine sites: pages. 
Use custom tables when you need to:
- Access data via the administration interface, but without the need to represent the data in a hierarchical structure in the content tree.
- Store large amounts of data in a flat structure.
https://docs.kentico.com/k12sp/developing-websites/defining-website-content-structure
2019-10-13T23:51:19
CC-MAIN-2019-43
1570986648343.8
[]
docs.kentico.com
UIElement.Measure(Size) Method

Definition

Updates the DesiredSize of a UIElement. Typically, objects that implement custom layout for their layout children call this method from their own MeasureOverride implementations to form a recursive layout update.

// C++/CX
public: void Measure(Size availableSize);

// C++/WinRT
void Measure(Size availableSize) const;

// C#
public void Measure(Size availableSize);

Computation of initial layout positioning in a XAML UI consists of a Measure call and an Arrange call, in that order. During the Measure call, the layout system determines an element's size requirements using the availableSize measurement. During the Arrange call, the layout system finalizes the size and position of an element's bounding box.

When a layout is first produced, it always has a Measure call that happens before Arrange. However, after the first layout pass, an Arrange call can happen without a Measure preceding it. This can happen when a property that affects only Arrange is changed (such as alignment), or when the parent receives an Arrange without a Measure. A Measure call will automatically invalidate any Arrange information.

Layout updates generally occur asynchronously (at a time determined by the layout system). An element might not immediately reflect changes to properties that affect element sizing (such as Width).
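A minimal sketch of the recursive pattern described above: a custom panel's MeasureOverride calls Measure on each child and then reports its own desired size. The panel name and the simple vertical-stacking policy are illustrative only.

using System;
using Windows.Foundation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public class SimpleStackPanel : Panel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        double width = 0, height = 0;
        foreach (UIElement child in Children)
        {
            // Measure updates child.DesiredSize based on the constraint passed in.
            child.Measure(new Size(availableSize.Width, double.PositiveInfinity));
            width = Math.Max(width, child.DesiredSize.Width);
            height += child.DesiredSize.Height;
        }
        return new Size(width, height);
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        double y = 0;
        foreach (UIElement child in Children)
        {
            // Position each child using the size it asked for during Measure.
            child.Arrange(new Rect(0, y, finalSize.Width, child.DesiredSize.Height));
            y += child.DesiredSize.Height;
        }
        return finalSize;
    }
}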
https://docs.microsoft.com/ja-jp/uwp/api/windows.ui.xaml.uielement.measure
2019-10-14T00:16:36
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Getting Started

Collection of articles that help you get started.

- How to install WPTC on your WordPress site
- How to push the changes from staging site to your production site?
- How do I whitelabel the WPTC plugin on my WordPress site?
- How do I stage my WordPress to create life cycle rule on S3
- How do I Exclude / Include Files
- How to manually connect your Amazon S3 account to WPTC more securely
- How to encrypt your WPTC database backup file
- How to include / exclude files in creating or copying a staging site
https://docs.wptimecapsule.com/category/5-getting-started
2019-10-13T22:43:05
CC-MAIN-2019-43
1570986648343.8
[]
docs.wptimecapsule.com
Project Issues¶ We appreciate your taking the time to report an issue you encountered using the Jupyter Docker Stacks. Please review the following guidelines when reporting your problem. - If you believe you’ve found a security vulnerability in any of the Jupyter projects included in Jupyter Docker Stacks images, please report it to [email protected], not in the issue trackers on GitHub. If you prefer to encrypt your security reports, you can use this PGP public key. - If you think your problem is unique to the Jupyter Docker Stacks images, please search the jupyter/docker-stacks issue tracker to see if someone else has already reported the same problem. If not, please open a new issue and provide all of the information requested in the issue template. - If the issue you’re seeing is with one of the open source libraries included in the Docker images and is reproducible outside the images, please file a bug with the appropriate open source project. - If you have a general question about how to use the Jupyter Docker Stacks in your environment, in conjunction with other tools, with customizations, and so on, please post your question on the Jupyter Discourse site.
https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/issues.html
2019-10-13T23:37:10
CC-MAIN-2019-43
1570986648343.8
[]
jupyter-docker-stacks.readthedocs.io
Welcome to MinPy's documentation!¶

MinPy aims at prototyping a pure NumPy interface above the MXNet backend. This package targets two groups of users:

- Beginners who wish to have a firm grasp of the fundamental concepts of deep learning, and
- Researchers who want a quick prototype of advanced and complex algorithms.

It is not intended for those who want to compose with ready-made components, although there are enough (layers, activation functions, etc.) to get started.

As much as possible, MinPy strives to be purely NumPy-compatible. It also abides by a fully imperative programming experience that is familiar to most users. Letting go of the popular approach that mixes in symbolic programming sacrifices some runtime optimization opportunities, in favor of algorithmic expressiveness and flexibility. However, MinPy performs reasonably well, especially when computation dominates.

This document describes its main features:

- Auto-differentiation
- Transparent CPU/GPU acceleration
- Visualization using TensorBoard
- Learning deep learning using MinPy

Basics
- MinPy installation guide
- Logistic regression tutorial
- NumPy under MinPy, with GPU
- Autograd in MinPy
- Transparent Fallback

Advanced Tutorials
- Complete solver and optimizer guide
- CNN Tutorial
- RNN Tutorial
- RNN on MNIST
- Reinforcement learning with policy gradient
- Improved RL with Parallel Advantage Actor-Critic
- Complete model builder guide
- Learning Deep Learning with MinPy

Features
- Select Policy for Operations
- Show Operation Dispatch Statistics
- Select Context for MXNet
- Customized Operator
- MinPy IO
- Limitation and Pitfalls
- Supported GPU operators

Visualization

Developer Documentation

History and Acknowledgement
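A short sketch of the NumPy-compatible, auto-differentiation workflow described above. The module paths (minpy.numpy, minpy.core.grad) are assumptions based on the project's published examples and may differ between releases.

# Hedged sketch: imperative NumPy-style code with automatic differentiation.
import minpy.numpy as np      # assumed drop-in NumPy namespace
from minpy.core import grad   # assumed autograd entry point


def loss(w):
    # A trivial scalar loss: sum of squares.
    return np.sum(w ** 2)


dloss = grad(loss)                       # derivative of loss w.r.t. w
w = np.array([1.0, 2.0, 3.0])
print(dloss(w))                          # expected gradient: [2. 4. 6.]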
http://minpy.readthedocs.io/en/latest/
2017-06-22T16:22:30
CC-MAIN-2017-26
1498128319636.73
[]
minpy.readthedocs.io
class OEIsMDLStereoGroup : public OESystem::OEUnaryPredicate<OEGroupBase> This class represents OEIsMDLStereoGroup functor that identifies groups (OEGroupBase) that store MDL enhanced stereo information. See also any of the following constants: OESystem::OEUnaryFunction<OEGroupBase, bool> *CreateCopy() const Deep copy constructor that returns a copy of the object. The memory for the returned OEIsMDLStereoGroup object is dynamically allocated and owned by the caller. The returned copy should be deallocated using C++ delete operator in order to prevent a memory leak.
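A minimal ownership sketch for the CreateCopy contract described above. The return type is taken verbatim from the documentation; the header name and namespaces follow usual OEChem TK conventions but should be treated as assumptions for your toolkit version.

#include "oechem.h"   // assumed umbrella header for the OEChem TK

using namespace OEChem;
using namespace OESystem;

int main()
{
    OEIsMDLStereoGroup isMDLStereo;

    // CreateCopy returns a heap-allocated functor owned by the caller.
    OEUnaryFunction<OEGroupBase, bool> *copy = isMDLStereo.CreateCopy();

    // ... pass *copy to an API that expects a group predicate ...

    delete copy;   // the caller must free the copy to prevent a memory leak
    return 0;
}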
https://docs.eyesopen.com/toolkits/cpp/oechemtk/OEChemClasses/OEIsMDLStereoGroup.html
2017-06-22T16:19:54
CC-MAIN-2017-26
1498128319636.73
[]
docs.eyesopen.com
filtergraph~ Filter editor Description Use the filtergraph~ object to generate filter coefficients for the biquad~ or cascade~ objects with a graphical interface. Examples Discussion The horizontal axis of the filtergraph~ object's display represents frequency and the vertical axis represents amplitude. The curve displayed reflects the frequency response of the current filter model. The frequency response is the amount that the filter amplifies or attenuates the frequencies present in an audio signal. The biquad~ (or cascade~) objects do the actual filtering based on the coefficients that filtergraph~ provides. The cutoff frequency (or center frequency) is the focal frequency of a given filter's activity. Its specific meaning is different for each filter type, but it can generally be identified as a transitional point (or center of a peak/trough) in the graph's amplitude curve. It is marked in the display by a colored rectangle whose width corresponds to the bandwidth of the filter. The bandwidth (the transitional band in Hz) is the principal range of a filter's effect, centered on the cutoff frequency. The edges of a filter's bandwidth are located where the frequency response has a 3dB change in amplitude from the cutoff or center frequency. Q (also known as resonance) describes filter "width" as the ratio of the center/cutoff frequency to the bandwidth. Using Q instead of bandwidth lets us move the center/cutoff frequency while keeping a constant bandwidth across octaves. The Q parameter for shelving filters is often called S (or slope), although it is ostensibly the same as Q. The filter's gain is the linear amplitude at the center or cutoff frequency. The interpretation of the gain parameter depends somewhat on the type of filter. The gain may also affect a shelf or large region of the filter's response. Arguments None. Attributes autoout [int] (default: 0) Toggles the output of coefficients on load. bgcolor [4 floats] Sets the background color in RGBA format. curvecolor [4 floats] Sets the curve color in RGBA format. The style color.attribute is mapped to the dbdisplay [int] (default: 1) Toggles db gain value display. display_flat [int] (default: 1) Toggles flat sign display. domain [2 floats] (default: 20. 20000.) Sets frequency display span. edit_Q [float] Sets the bandwidth (Q) for the currently selected filter. edit_amp [float] Sets the amplitude for the currently selected filter. edit_analog [int] Toggles the analog filter prototype parameter for the currently selected filter when filtergraph~ is in bandpass or peaknotch mode. For more information on analog filter mode, see the message listing above. For single filters, the filter type displayed by the filtergraph~ object is the currently selected filter; when dealing with multiple filters, the currently selected filter is set using the message (with the filters being numbered from 0). edit_displaydot [int] Toggles the display of the mousable bandwidth region for the currently selected filter when filtergraph~ is in display mode. For more information, see the message listing above. For single filters, the filter type displayed by the filtergraph~ object is the currently selected filter; when dealing with multiple filters, the currently selected filter is set using the message (with the filters being numbered from 0). edit_filter [int] (default: 0) Selects which filter to edit if nfilters is greater than 1. Filters are numbered from 0. 
Possible values: '0' edit_freq [float] Sets center/cutoff frequency for the currently selected filter. edit_gainmode [int] Tggles the gain parameter for the currently selected filter. edit_maxQ [atom] Sets the maximum Q of the currently selected filter. edit_maxamp [atom] Sets the maximum amplitude for the currently selected filter. edit_maxfreq [atom] Sets the upper bound for the frequency of the currently selected filter. edit_minQ [atom] Sets the minimum bandwidth (Q) for the currently selected filter. edit_minamp [atom] Sets the minimum amplitude for the currently selected filter. edit_minfreq [atom] Sets the minimum frequency for the currently selected filter. edit_mode [int] Sets the response shape for the currently selected filter. Possible values: 0 = 'display' ( None/arbitrary ) 1 = 'lowpass' ( Low frequencies pass, high frequencies attenuated ) 2 = 'highpass' ( High frequencies pass, low frequencies attenuated ) 3 = 'bandpass' ( A band of frequencies pass, everything else is attenuated ) 4 = 'bandstop' ( A band of frequencies are attenuated, everything else passes ) 5 = 'peaknotch' ( A band of frequencies is attenuated or boosted depending on the gain ) 6 = 'lowshelf' ( Low frequencies are attenuated or boosted depending on the gain ) 7 = 'highshelf' ( High frequencies are attenuated or boosted depending on the gain ) 8 = 'resonant' ( Another bandpass filter ) 9 = 'allpass' ( All frequencies pass through but the phase is affected ) fullspect [int] (default: 0) Display the full frequency spectrum from -Nyquist to +Nyquist. hcurvecolor [4 floats] Sets the selection color in RGBA format. The style color.attribute is mapped to the linmarkers [16 floats] (default: 5512.5 11025. 16537.5) Marker positions for the linear frequency display. By default, the markers are set at ± SampleRate/4, SampleRate/2, and (3 * SampleRate)/4. logamp [int] (default: 1) Sets the amplitude display mode. Possible values: 0 = 'Linear' ( Linear amplitude display ) Displays amplitudes using a linear scale. 1 = 'Logarithmic' ( Log amplitude display ) Displays amplitudes using a logarithmic scale. logfreq [int] (default: 1) Sets the frequency display mode. Possible values: 0 = 'Linear' ( Linear frequency display ) Displays frequencies using a linear scale. 1 = 'Logarithmic' ( Log frequency display ) Displays frequencies using a logarithmic scale. logmarkers [16 floats] (default: 10. 100. 1000. 10000.) Marker positions for the log frequency display. By default, the markers are set at± 50Hz, 500Hz and 5kHz at 44.1kHz. These values correspond to ± 0.007124, 0.071238, and 0.712379 radians for any sample rate. markercolor [4 floats] Sets the grid color in RGBA format. The style color.attribute is mapped to the nfilters [int] (default: 1) Number of cascaded biquad filters displayed. The range is between 1 and 24. When using more than one filter, the output of the filtergraph~ should be sent to a cascade~ object instead of a biquad~. numdisplay [int] (default: 1) Toggles numerical value display. parameter_enable [int] Enables use of this object with Max for Live Parameters and allows for setting initial parameter values in the Max environment. phasespect [int] (default: 0) Toggles phase response display. range [2 floats] (default: 0.0625 16.) Sets the amplitude display range. style [symbol]7.0.0 Sets the style to be applied to the object. Styles can be set using the Format palette. textcolor [4 floats] Sets the color of the labels in RGBA format. 
The style color.attribute is mapped to the In 1st-5th inlets: When in display mode, a float in one of the first five inlets changes the current value of the corresponding biquad~ filter coefficient (a0, a1, a2, b1, and b2, respectively), recalculates the filter's frequency response based on these coefficients and causes a list of the current filter coefficients to be output from the leftmost outlet.. float Arguments. list Arguments a1 [float] a2 [float] b1 [float] b2 [float] in 6th inlet: A list of three values which correspond to center/cutoff frequency, gain and Q/S (resonance/slope), sets these values, recalculates the new filter coefficients and causes output. This is equivalent to the message. anything Arguments bandpass Arguments gain [float] Q [float] bandstop Arguments gain [float] Q [float] allpass Arguments gain [float] Q [float] analog Arguments dictionary Arguments display Arguments displaydot Arguments cascade Arguments constraints Arguments minimum-frequency [float] maximum-frequency [float] minimum-gain [float] maximum-gain [float] minimum-Q [float] maximum-Q [float] flat Arguments Lowpass and highpass filters: Q values set to 0.707, and gain coefficients are set to 1. (0dB) Band pass and band stop filters are ignored. All other filter types: The gain coefficients are set to 1. (0dB) highpass Arguments gain [float] Q [float] highorder Arguments highshelf Arguments gain [float] slope [float] gainmode Arguments markers Arguments mode Arguments Number - Filter type 0 - display only 1 - lowpass 2 - highpass 3 - bandpass 4 - bandstop 5 - peaknotch 6 - lowshelf 7 - highshelf 8 - resonant 9 - allpass In display mode, filtergraph~ displays the frequency response for a set of five biquad~ filter coefficients. In the other modes, it graphs the frequency response of a filter based on three parameters: cf (center frequency, or cutoff frequency) gain, and Q (resonance) or S (slope - used for the shelving filters). (mouse) If multiple bandwidth regions are overlapping, you can cycle through them by double-clicking on the topmost one. This is useful for accessing smaller bandwidth regions that might be otherwise "covered" by a larger region. mousemode Arguments y-response-flag [int] For horizontal movement (specified by the first argument), normal behavior means that clicking on the filter band and dragging horizontally changes the filter's cutoff frequency. When set to the alternate mouse mode (2), horizontal movement affects Q, or resonance. When turned off (0), mouse activity along the x-axis has no effect. For vertical movement (specified by the second argument), normal behavior means that the y-axis is mapped to gain during clicking and dragging activity. When the alternate mouse mode (2) is selected, vertical movement changes the Q (resonance) setting instead. When turned off (0), vertical mouse movement has no effect. lowpass Arguments gain [float] Q [float] lowshelf Arguments gain [float] slope [float] peaknotch Arguments gain [float] Q [float] options Arguments filter-mode [int] gain-enable-flag [int] analog-filter-prototype-flag [int] interactive-filter-mode-flag [int] params Arguments frequency [float] gain [float] Q [float] query Arguments selectfilt Arguments set Arguments a1/gain [float] a2/Q [float] b1 [float] b2 [float] in 6th inlet: A list of three values which correspond respectively to center/cutoff frequency, gain and Q/S (resonance/slope), sets these values, recalculates the new filter coefficients but does not cause output. In display mode this message has no effect. 
resonant Arguments gain [float] Q [float] setconstraints Arguments minimum-frequency [float] maximum-frequency [float] minimum-gain [float] maximum-gain [float] minimum-Q [float] maximum-Q [float] setfilter Arguments setoptions Arguments filter-mode [int] gain-enable-flag [int] analog-filter-prototype-flag [int] interactive-filter-mode-flag [int] setparams Arguments frequency [float] gain [float] Q [float] whichfilt Arguments Output float Out second through fifth outlets: Frequency, Gain (linear), Resonance (Q) and Bandwidth output in response to clicks on the filtergraph~ object. int Out rightmost (seventh) outlet: Filter number. Indicates which of the cascaded biquad filters is being highlighted and/or edited. list Out leftmost outlet: a list of 5 floating-point filter coefficients for the biquad~ object. Coefficients output in response to mouse clicks and changes in the coefficient or filter parameter inlets. They are also output when the audio is turned on, and optionally when the patch is loaded if the automatic output option is turned on (see message). Out sixth outlet: a list of 2 floating-point values (amplitude, phase) output in response to the message (see above).
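To make the relationship between the five coefficients and the displayed frequency response concrete, here is a small Python sketch that is not part of the Max documentation. It assumes the biquad~ difference equation y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] - b1*y[n-1] - b2*y[n-2] (check the biquad~ reference for the authoritative convention), and the example coefficients are arbitrary placeholders.

import numpy as np

def biquad_response(a0, a1, a2, b1, b2, sr=44100.0, n=512):
    # Frequencies from near-DC up to Nyquist.
    f = np.linspace(1.0, sr / 2.0, n)
    w = 2.0 * np.pi * f / sr
    z1 = np.exp(-1j * w)  # z^-1 evaluated on the unit circle
    # Transfer function implied by the assumed difference equation:
    # H(z) = (a0 + a1*z^-1 + a2*z^-2) / (1 + b1*z^-1 + b2*z^-2)
    h = (a0 + a1 * z1 + a2 * z1**2) / (1.0 + b1 * z1 + b2 * z1**2)
    return f, 20.0 * np.log10(np.abs(h) + 1e-12)  # amplitude in dB

# Hypothetical coefficients for a lowpass-like curve, purely for illustration.
freqs, db = biquad_response(0.1, 0.2, 0.1, -1.1, 0.5)
print(freqs[0], db[0], freqs[-1], db[-1])

Plotting db against freqs reproduces the kind of amplitude curve filtergraph~ draws in display mode.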
https://docs.cycling74.com/max7/maxobject/filtergraph~
2017-06-22T16:42:25
CC-MAIN-2017-26
1498128319636.73
[array(['images/filtergraph~.png', None], dtype=object)]
docs.cycling74.com
. Dimension Structure The dimension structure for a time dimension depends on how the underlying data source stores the time period information. This difference in storage produces two basic types of time dimensions: Time dimension Time dimensions are similar to other dimensions in that a dimension table supplies the attributes for the dimension. Each column in the dimension main table defines an attribute for a specific time period. Like other dimensions, the fact table has a foreign key relationship to the dimension table for the time dimension. The key attribute for a time dimension is based either on an integer key or on the lowest level of detail, such as the date, that appears in the dimension main table. Server time dimension If you do not have a dimension table to which to bind time-related attributes, you can have Analysis Services define a server time dimension based on time periods. To define the hierarchies, levels, and members represented by the server time dimension, you select standard time periods when you create the dimension. Attributes in a server time dimension have a special time-attribute binding. Analysis Services uses the attribute types that are related to dates, such as Year, Month, or Day, to define members of attributes in a time dimension. After you include the server time dimension in a cube, you set the relationship between the measure group and the server time dimension by specifying a relationship on the Define Dimension Usage page of the Cube Wizard.. Adding Time Intelligence with the Business Intelligence Wizard. Note You cannot use the Business Intelligence Wizard to add time intelligence to server time dimensions. The Business Intelligence Wizard adds a hierarchy to support time intelligence, and this hierarchy must be bound to a column of the time dimension table. Server time dimensions do not have a corresponding time dimension table and therefore cannot support this additional hierarchy. See Also Create a Time Dimension by Generating a Time Table Business Intelligence Wizard F1 Help Dimension Types
https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/database-dimensions-create-a-date-type-dimension
2017-06-22T17:38:43
CC-MAIN-2017-26
1498128319636.73
[]
docs.microsoft.com
Configuring Parallels RAS RDSH Servers for use with AppsAnywhere Overview This article provides instructions on how to install and configure a Parallels RAS RD Session Host (RDSH) server, so it can be used to dynamically launch and deliver Cloudpaged applications via AppsAnywhere. It is assumed that the RDSH server has already been configured as part of the Parallels RAS Farm and they have access to AppsAnywhere to run Cloudpaged applications. Refer to the Add an RD Session Host section of the RAS Administrators Guide if servers need adding to the Farm. The steps in this article apply to Windows Server 2008 R2 and later. Note Addition, removal and management of the RDSH servers within the RAS farm is completed by customers AppsAnywhere client deployment and upgrades on the RDSH servers are the customer's responsibility AppsAnywhere Support cannot install and manage the clients or RDSH servers for customers Installing the Client and Player The latest AppsAnywhere Client and Cloudpaging Player installers can be obtained from Latest Releases. NOTE: If the intention is for users to have access to OneDrive for file saving, then the AppsAnywhere OneDrive Launcher will also need to be installed. Managed Deployment It is recommended that you use a managed deployment method such as Group Policy or SCCM to perform the installation of the AppsAnywhere Client and Cloudpaging Player. This will allow you to easily carry out future updates on an RDSH server(s). You can do this by following Deploying AppsAnywhere Client. Manual Deployment Alternatively you can download the AppsAnywhere Client and perform a manual installation by logging on to the Parallels RAS RDSH server and accessing AppsAnywhere. If performing a manual installation, simply step through the installer selecting all of the default options. Assuming that AppsAnywhere is configured to pre-deploy Cloudpaging Player, this will be installed the first time the AppsAnywhere Client starts, straight after the installation. Otherwise, you will also need to download and install Cloudpaging Player manually. Publishing the AppsAnywhere Launcher(s) in Parallels RAS In order for the AppsAnywhere Client to receive instructions the AppsAnywhereLauncher.exe must be added as a published resource within the Parallels RAS Farm/Site. NOTE: This only needs to be completed during initial setup. Launch the Parallels Remote Access Server Console and select the Publishing option from the left-hand tree. Right click on the Publishing Resources entry and select Add. Alternatively click the Add... button at the bottom of the window. On the Select Item Type dialog select Application and click Next. On the Select Server Type dialog select RD Session Hosts and click Next. On the Select Application Type dialog select Single Application and click Next. On the Publish From dialog select the relevant RDSH Server(s) or Server Groups and click Next. Click the ... button next to the Target field and browser to the AppsAnywhereLauncher.exe file. This EXE is located in the AppsAnywhere Client installation directory "\Program Files\Software2\AppsAnywhere\". Once complete, click Finish. Each application published within Parallels RAS is assigned an ID number. You need to make a note of this number as it will be required when publishing applications within AppsAnywhere for use on this Parallels RAS RDSH server. In this example the AppsAnywhereLauncher.exe published application has an ID of 13. 
Once the AppsAnywhere Client is published and you have a note of the Application ID, the Parallels RAS system is configured to dynamically receive and deliver Cloudpaged applications via AppsAnywhere. If the intention is for users to have access to OneDrive for file saving, the AppsAnywhere OneDrive Launcher will also need to be published in the same way.
https://docs.appsanywhere.com/appsanywhere/2.12/configuring-parallels-ras-rdsh-servers-for-use-wit
2022-09-25T04:37:04
CC-MAIN-2022-40
1664030334514.38
[array(['https://support.appsanywhere.com/hc/article_attachments/360029129753/mceclip8.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360028316134/mceclip10.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360029129933/mceclip11.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360029130673/mceclip16.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360028317534/mceclip18.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360028318474/mceclip20.png', None], dtype=object) array(['https://support.appsanywhere.com/hc/article_attachments/360028318674/mceclip21.png', None], dtype=object) ]
docs.appsanywhere.com
How to Join Fedora Resources This page holds resources related to “How to Join Fedora”. The printouts explain the basic steps to joining Fedora and connecting with the community. The slide presentation covers a lot of material. These resources are meant to be used by anyone implementing outreach for Fedora. Slide Presentation This “How to Join Fedora” slide presentation has been designed for general use. There are PDF and ODP files available. The typeface used throughout the deck is Montserrat. Find the files here.
https://docs.fedoraproject.org/ast/commops/latest/design-assets/joining-printouts/
2022-09-25T05:09:47
CC-MAIN-2022-40
1664030334514.38
[]
docs.fedoraproject.org
Displays the attributes of a column, including whether it is a single-column primary or secondary index and, if so, whether it is unique. Privileges You must either own the table, join index, or hash index in which the column is defined or have at least one privilege on that table. Use the SHOW privilege to enable a user to perform HELP or SHOW requests only against a specified table, join index, or hash index.
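As an illustration only (not taken from the Teradata documentation above): once you have an open Python DB-API 2.0 connection to a Teradata system, HELP COLUMN can be submitted like any other SQL request. The table and column names below are placeholders.

def describe_column(con, table, column):
    # `con` is any open DB-API 2.0 connection to the Teradata system
    # (for example one created with the teradatasql driver).
    cur = con.cursor()
    cur.execute(f"HELP COLUMN {table}.{column}")
    names = [d[0] for d in cur.description]  # attribute names returned by HELP
    for row in cur.fetchall():
        print(dict(zip(names, row)))
    cur.close()

# describe_column(con, "sales.orders", "order_date")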
https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/Table-Statements/HELP-COLUMN
2022-09-25T05:03:06
CC-MAIN-2022-40
1664030334514.38
[]
docs.teradata.com
Databases (revision as of 14:45, 27 October 2011) This page is part of the Getting Started Section. In BaseX, a single database contains an arbitrary number of resources. Originally, all resources were XML documents; since Version 7.0, however, raw files (binaries) can be stored as well. Valid database names are defined by [_-a-zA-Z0-9]+. Contents Manage Resources Once you have created a database, additional commands exist to modify its contents: - XML documents can be added with the ADD command. - Raw files are added with STORE. - Any resource can be replaced with another using the REPLACE command. - Any resources can be deleted with the DELETE command. To speed up the insertion of new documents in bulk operations, you can turn the AUTOFLUSH option off. The following commands create an empty database, add two resources and finally delete them again: CREATE DB example SET AUTOFLUSH false ADD example.xml ADD ... STORE TO images/ 123.jpg The stored resources can be retrieved as well: - The RETRIEVE command returns raw files without modifications. - The XQuery function db:retrieve("dbname", "path/to/docs") returns raw files in their Base64 representation. By choosing "method=raw" as Serialization Option, the data is returned in its raw form: declare option output:method "raw"; db:retrieve('multimedia', 'sample.avi')
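The same ADD/STORE/RETRIEVE ideas can also be driven from Python over BaseX's REST interface. This is only a sketch: it assumes a local BaseX HTTP server on the default port with the default admin credentials, which you should adjust to your setup.

import requests

BASE = "http://localhost:8984/rest"
AUTH = ("admin", "admin")  # default credentials, assumption

# Create the database "example" (empty body means an empty database).
requests.put(f"{BASE}/example", auth=AUTH)

# Add (or replace) an XML resource inside it.
requests.put(f"{BASE}/example/docs/example.xml",
             data="<root>hello</root>",
             auth=AUTH,
             headers={"Content-Type": "application/xml"})

# Retrieve it again.
print(requests.get(f"{BASE}/example/docs/example.xml", auth=AUTH).text)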
https://docs.basex.org/index.php?title=Databases&diff=prev&oldid=5109
2022-09-25T05:33:42
CC-MAIN-2022-40
1664030334514.38
[]
docs.basex.org
Geo Template The geo template lets you build your visualizations on top of geographic maps. In Kumu Enterprise, the geo template will not work out of the box—first, you need to configure geocoding. For more information, see the Configuration guide. Apply the Geo template using the Basic Editor To apply the Geo template using the Basic Editor: - Click the Settings icon on the right side of your map to open the Basic Editor. - Click MORE OPTIONS and select Customize view defaults from the list. - Scroll down to the General settings section. - In that section, you can use the Template dropdown menu to select the geo template. Move back to the main Basic Editor panel, then click SAVE to finish the process. Apply the Geo template using the Advanced Editor You can activate the geo template by opening the Advanced Editor (keyboard shortcut: press .) and adding template: geo; to the @settings block. Pick your map style You can use the geo-style property to pick between these different map styles: geo-style: auto Use geo-style: dark for a dark version of the auto map. geo-style: streets geo-style: satellite Good to know: - Geo maps cannot be exported to PDF or PNG. - Screenshots and PDFs are not currently supported for geo. - By default, the geo template limits you to squares and straight lines, but if you want to test out the full range of decorations (circles, borders, curved lines, flags, etc.), add renderer: canvas; to your @settings block. - You can use the scale-max and scale-min properties to adjust the minimum and maximum allowed zoom levels for your readers. See the settings reference for more guidance.
https://docs.kumu.io/guides/templates/geo.html
2022-09-25T04:51:07
CC-MAIN-2022-40
1664030334514.38
[array(['../../images/honolulu-geo.png', 'Geo map showing latitude and longitude for Honolulu'], dtype=object) array(['../../images/geo-style-auto.png', 'geo-style auto'], dtype=object) array(['../../images/geo-style-streets.png', 'geo-style streets'], dtype=object) array(['../../images/geo-style-satellite.png', 'geo-style satellite'], dtype=object) ]
docs.kumu.io
Substance Profile A Substance profile provides information of an ingredient used to formulate a medicinal product. The Profile page enables you to view the details of a substance. Substance profile page includes various facets such as Products and Equivalent substances. The Substance profile page lets you perform certain operations such as view, create, edit, and delete. By default, a Substance profile page appears in the Viewing mode. This view mode allows you to read and print Substance Profile information. When you select the Editing mode, all the text fields become editable and Add buttons appear on the facets. These Add buttons enables you to add additional attributes for the facets. Delete enables you to remove a substance profile. Substance Profile View The Substance profile page includes Products and Equivalent Substances facets to show information of an ingredient used to formulate a medicinal product. Products: It shows the medicinal products that contains the selected substance. This uses the relationship type ‘IsIngredientOf’. Equivalent Substance: It shows the other substances that have the same quality of the selected substance. This uses the relationship type “HasPeer". You can view, create, and edit Substance profiles and the facets that are part of the profile page.
https://docs.reltio.com/datadrivenapps/lsproductthreesixtysubstanceprofile.html
2022-09-25T06:02:25
CC-MAIN-2022-40
1664030334514.38
[array(['../images/dda/prod360/dda_lsprod360_subsprofvw.png', None], dtype=object) ]
docs.reltio.com
Prerequisites to Installing Salesforce Connector You need to ensure that you have valid credentials, entitlements, user accounts, and other prerequisites before installing Salesforce Connector. The following are the prerequisites to install Salesforce Connector: - Valid credentials for Reltio Connected Cloud tenant including tenant id, tenant environment, username, and password. - Salesforce Connector entitlement for the Reltio Connected Cloud tenant. - Confirm that Salesforce is added as a source in the Reltio Connected Cloud tenant metadata configuration. - Lightning version is switched on for your Salesforce organization. - User account with the following user roles: - ROLE_SFDC_CONNECTOR - ROLE_API (For SBC) - Salesforce Administrator Account with Security Token - This account is used by the managed package to perform operations in Salesforce. - Salesforce - Custom Domain is to be set, enabled, and deployed to all users. For more information, see My Domain.
https://docs.reltio.com/salesforce/sfdcprerequisites.html
2022-09-25T06:14:31
CC-MAIN-2022-40
1664030334514.38
[]
docs.reltio.com
Gas Comae ( sbpy.activity.gas)¶ Photolysis¶ Two functions provide reference data for the photolysis of gas molecules in optically thin comae: photo_lengthscale() and photo_timescale(). The source data is stored in sbpy.activity.gas.data. photo_lengthscale() provides empirical comae lengthscales (defaults to Cochran and Schleicher 1993)): >>> from sbpy.activity import gas >>> gas.photo_lengthscale(None) Traceback (most recent call last): ... ValueError: Invalid species None. Choose from: H2O [CS93] OH [CS93] >>> gas.photo_lengthscale('H2O') <Quantity 24000. km> Use photo_timescale() to retrieve photolysis timescales: >>> gas.photo_timescale(None) Traceback (most recent call last): ... ValueError: Invalid species None. Choose from: CH3OH [C94] CN [H92] CO [CE83] CO2 [CE83] H2CO [C94] H2O [CS93] HCN [C94] OH [CS93] >>> gas.photo_timescale('H2O') <Quantity 52000. s> Some sources provide values for the quiet and active Sun (Huebner et al. 1992): >>> gas.photo_timescale('CN', source='H92') <Quantity [315000., 135000.] s> With the Bibliography Tracking Module (sbpy.bib), the citation may be discovered: >>> from sbpy import bib >>> bib.reset() # clear any old citations >>> with bib.Tracking(): ... tau = gas.photo_timescale('H2O') >>> print(bib.to_text()) sbpy: software: sbpy: Mommert, Kelley, de Val-Borro, Li et al. 2019, The Journal of Open Source Software, Vol 4, 38, 1426 sbpy.activity.gas.core.photo_timescale: H2O photodissociation timescale: Cochran & Schleicher 1993, Icarus, Vol 105, 1, 235 Fluorescence¶ Reference data for fluorescence band emission is available via fluorescence_band_strength(). Compute the fluorescence band strength (luminosity per molecule) of the OH 0-0 band at 1 au from the Sun, moving towards the Sun at 1 km/s (defaults to Schleicher and A’Hearn 1988): >>> import astropy.units as u >>> LN = gas.fluorescence_band_strength('OH 0-0', -1 * u.km / u.s) >>> print(LN) [1.54e-15] erg / s Gas coma models¶ The Haser (1957) model for parent and daughter species is included, with some calculation enhancements based on Newburn and Johnson (1978). With Haser, we may compute the column density and total number of molecules within aperture: >>> Q = 1e28 / u.s # production rate >>> v = 0.8 * u.km / u.s # expansion speed >>> parent = gas.photo_lengthscale('H2O') >>> daughter = gas.photo_lengthscale('OH') >>> coma = gas.Haser(Q, v, parent, daughter) >>> print(coma.column_density(10 * u.km)) 7.099280153851781e+17 1 / m2 >>> print(coma.total_number(1000 * u.km)) 1.161357452192558e+30 The gas coma models work with sbpy’s apertures: >>> from sbpy.activity import AnnularAperture >>> ap = AnnularAperture((5000, 10000) * u.km) >>> print(coma.total_number(ap)) 3.8133654170856037e+31 Production Rate calculations¶ Various functions that aid in the calculation of production rates are offered. Phys has a function called from_jplspec which takes care of querying the JPL Molecular Spectral Catalog through the use of jplspec and calculates all the necessary constants needed for production rate calculations in this module. Yet, the option for the user to provide their own molecular data is possible through the use of an Phys object, as long as it has the required information. It is imperative to read the documentation of the functions in this section to understand what is needed for each. If the user does not have the necessary data, they can build an object using JPLSpec: >>> from sbpy.data.phys import Phys >>> import astropy.units as u >>> temp_estimate = 47. 
* u.K >>> transition_freq = (230.53799 * u.GHz).to('MHz') >>> integrated_flux = 0.26 * u.K * u.km / u.s >>>>> mol_data = Phys.from_jplspec(temp_estimate, transition_freq, mol_tag) >>> mol_data <QTable length=1> t_freq temp lgint300 ... degfreedom mol_tag MHz K MHz nm2 ... float64 float64 float64 ... int64 int64 -------- ------- --------------------- ... ---------- ------- 230538.0 47.0 7.591017628812526e-05 ... 2 28001 Having this information, we can move forward towards the calculation of production rate. The functions that sbpy currently provides to calculate production rates are listed below. Integrated Line Intensity Conversion¶ The JPL Molecular Spectroscopy Catalog offers the integrated line intensity at 300 K for a molecule. Yet, in order to calculate production rate, we need to know the integrated line intensity at a given temperature. This function takes care of converting the integrated line intensity at 300 K to its equivalent in the desired temperature using equations provided by the JPLSpec documentation. For more information on the needed parameters for this function see intensity_conversion. >>> from sbpy.activity import intensity_conversion >>> intl = intensity_conversion(mol_data) >>> mol_data.apply([intl.value] * intl.unit, name='intl') 11 >>> intl <Quantity 0.00280051 MHz nm2> Einstein Coefficient Calculation¶ Einstein coefficients give us insight into the molecule’s probability of spontaneous absorption, which is useful for production rate calculations. Unlike catalogs like LAMDA, JPLSpec does not offer the Eistein coefficient and it must be calculated using equations provided by the JPL Molecular Spectroscopy Catalog. These equations have been compared to established LAMDA values of the Einstein Coefficient for HCN and CO, and no more than a 24% difference has been found between the calculation from JPLSpec and the LAMDA catalog value. Since JPLSpec and LAMDA are two very different catalogs with different data, the difference is expected, and the user is allowed to provide their own Einstein Coefficient if they want. If the user does want to provide their own Einstein Coefficient, they may do so simply by appending their value with the unit 1/s to the Phys object, called mol_data in these examples. For more information on the needed parameters for this function see einstein_coeff. >>> from sbpy.activity import einstein_coeff >>> au = einstein_coeff(mol_data) >>> mol_data.apply([au.value] * au.unit, name = 'Einstein Coefficient') 12 >>> au <Quantity 7.03946054e-07 1 / s> Beta Factor Calculation¶ Returns beta factor based on timescales from gas and distance from the Sun using an Ephem object. The calculation is parent photodissociation timescale * (distance from comet to Sun)**2 and it accounts for certain photodissociation and geometric factors needed in the calculation of total number of molecules total_number If you wish to provide your own beta factor, you can calculate the equation expressed in units of AU**2 * s , all that is needed is the timescale of the molecule and the distance of the comet from the Sun. Once you have the beta factor you can append it to your mol_data phys object with the name ‘beta’ or any of its alternative names. For more information on the needed parameters for this function see beta_factor. 
>>> from astropy.time import Time >>> from sbpy.data import Ephem >>> from sbpy.activity import beta_factor >>>>> time = Time('2017-12-22 05:24:20', format = 'iso') >>> ephemobj = Ephem.from_horizons(target, epochs=time.jd) >>> beta = beta_factor(mol_data, ephemobj) >>> mol_data.apply([beta.value] * beta.unit, name='beta') 13 >>> beta <Quantity [13333365.25745597] AU2 s> Simplified Model for Production Rate¶ activity provides several models to calculate production rates in comets. One of the models followed by this module is based on the following paper: The following example shows the usage of the function. This LTE model does not include photodissociation, but it does serve as way to obtain educated first guesses for other models within sbpy. For more information on the parameters that are needed for the function see from_Drahus. >>> from sbpy.activity import LTE >>> vgas = 0.5 * u.km / u.s # gas velocity >>> lte = LTE() >>> q = lte.from_Drahus(integrated_flux, mol_data, ephemobj, vgas, aper, b=b) >>> q <Quantity 3.59397119e+28 1 / s> LTE Column Density Calculation¶ To calculate a column density with no previous column density information, we can use equation 10 from Bockelee-Morvan et al. 2004. This function is very useful to obtain a column density with no previous guess for it, and also useful to provide a first guess for the more involved Non-LTE model for column density explained in the next section. >>> cdensity = lte.cdensity_Bockelee(integrated_flux, mol_data) >>> mol_data.apply([cdensity.value] * cdensity.unit, name='cdensity') Non-LTE Column Density Calculation¶ Once the user has a guess for their column density, the user can then implement the sbpy.activity NonLTE function sbpy.activity.NonLTE.from_pyradex. This function calculates the best fitting column density for the integrated flux data using the python wrapper pyradex of the Non-LTE iterative code RADEX. The code utilizes the LAMDA catalog collection of molecular data files, presently this is the only functionality available, yet in the future a function will be provided by sbpy to build your own molecular data file from JPLSpec for use in this function. The code will look for a ‘cdensity’ column value within mol_data to use as its first guess. For a more detailed look at the input parameters, please see from_pyradex. >>> from sbpy.activity import NonLTE >>> nonlte = NonLTE() >>> cdensity = nonlte.from_pyradex(integrated_flux, mol_data, iter=500) >>> mol_data.apply([cdensity.value] * cdensity.unit, name='cdensity') Note that for this calculation the installation of pyradex is needed. Pyradex is a python wrapper for the RADEX fortran code. See pyradex installation and README file for installation instruction and tips as well as a briefing of how pyradex works and what common errors might arise. You need to make sure you have a fortran compiler installed in order for pyradex to work (gfortran works and can be installed with homebrew for easier management). Total Number¶ In order to obtain our total number of molecules from flux data, we use the millimeter/submillimeter spectroscopy beam factors explained and detailed in equation 1.3 from: Drahus, M. (2010). Microwave observations and modeling of the molecularcoma in comets. PhD Thesis, Georg-August-Universität Göttingen. If the user prefers to give the total number, they may do so by appending to the mol_data Phys object with the name total_number or any of its alternative names. For more information on the needed parameters for this function see total_number. 
>>> from sbpy.activity import total_number >>> integrated_flux = 0.26 * u.K * u.km / u.s >>> b = 0.74 >>> aper = 10 * u.m >>> tnum = total_number(integrated_flux, mol_data, aper, b) >>> mol_data.apply([tnum], name='total_number') 14 >>> tnum <Quantity [2.93988826e+26]> Haser Model for Production Rate¶ Another model included in the module is based off of the model in the following literature: This model is well-known as the Haser model. In the case of our implementation the function takes in an initial guess for the production rate, and uses the module found in gas to find a ratio between the model total number of molecules and the number of molecules calculated from the data to scale the model Q and output the new production rate from the result. This LTE model does account for the effects of photolysis. For more information on the parameters that are needed for the function see from_Haser(). >>> from sbpy.activity import Haser, photo_timescale, from_Haser >>> Q_estimate = 3.5939*10**(28) / u.s >>> parent = photo_timescale('CO') * vgas >>> coma = Haser(Q_estimate, vgas, parent) >>> Q = from_Haser(coma, mol_data, aper=aper) >>> Q <Quantity [[9.35795579e+27]] 1 / s> For more involved examples and literature comparison for any of the production rate modules, please see notebook examples. Reference/API¶ activity.gas core
https://sbpy.readthedocs.io/en/latest/sbpy/activity/gas.html
2022-01-17T02:05:59
CC-MAIN-2022-05
1642320300253.51
[]
sbpy.readthedocs.io
Agriculture and Agri-Food Canada Agricultural Value Added Corporation Ltd., (AVAC Ltd.) Alberta Heritage Foundation for Medical Research (AHFMR) Alberta Ingenuity Fund Alberta Advanced Education and Technology (AET) Alberta Agricultural Research Institute (AARI) Alberta Energy Research Institute (AERI) Alberta Forestry Research Institute (AFRI) Alberta Innovation and Science Research Investment Program University Research and Strategic Investments (URSI) British Columbia Innovation Council (BCIC) British Columbia Ministry of Agriculture and Lands Canada Saskatchewan Irrigation Diversification Centre (CSIDC) Canadian Academy of Engineering Canada Foundation for Innovation Canadian Space Agency Canadian Science and Technology Growth Fund Canadian Arthritis Networ Centre for Microelectronics Assembly and Packaging Centre for Research in Earth and Space Technology (CRESTech) Climate Change Central Department of National Defence – Defence Research & Development Canada Forest Renewal of British Colombia Launchworks Inc. of Calgary, AB Materials and Manufacturing Ontario Ministère du Développement économique, de l’Innovation, et de l’Exportation National Research Council of Canada Natural Resources Canada – GeoConnections Climate Change Technology and Innovation Natural Sciences & Engineering Research Council Ontario Centres of Excellence Ontario R&D Challenge Fund Ontario Ministry of Mines and Northern Development Ontario Ministry of Natural Resources Ontario Ministry of Research and Innovation Ontario Research Commercialization Program Innovation Demonstration Fund Next Generation of Jobs Fund – Strategic Opportunities Program Biopharmaceutical Investment Program Green Schools Pilot Initiative GreenFIT PRECARN Associates Inc. Saskatchewan Research Council Saskatchewan Agriculture and Food SickKids Hospital (Toronto, ON) – Corporate Ventures TEC Edmonton – Venture Prize Award Teck Cominco Limited The GEOIDE Network of Centres of Excellence
http://docs.futureinnovate.net/?page_id=230
2022-01-17T02:08:15
CC-MAIN-2022-05
1642320300253.51
[]
docs.futureinnovate.net
Thank you for choosing Blesta! This section is designed to help you get up and running with your new or upgraded installation as quickly as possible. Already Installed? If Blesta is already installed, you might be interested in learning more about Using Blesta. Getting Started Requirements Minimum and recommended system requirements. Installing Blesta Installation steps and common issues. Upgrading Blesta Upgrading steps and common issues. Migrating to Blesta How-to on migrating from a competing system. Logging In Logging in and configuring MOTP or TOTP. Enabling Two-Factor Enabling two-factor authentication with TOTP or MOTP. Moving Blesta How to move Blesta to a new server. Important Terms Installation steps and common issues. Debugging / Tools Tweaking the Configs Making changes to config files that can impact the behavior of Blesta.
https://docs.blesta.com/display/user/Getting+Started
2022-01-17T01:44:22
CC-MAIN-2022-05
1642320300253.51
[]
docs.blesta.com
Test summary Connect an Apple iPhone 12 Pro phone (unlocked) to the Celona CBRS network. Apple iPhone 12 Pro phone details Software Version: 14.4.2 Model Name: iPhone 12 Pro Model Number: MGJW3LL/A Carrier Lock: No SIM restrictions Prerequisites Be within range of your Celona RAN, consisting of active access points. Acquire a Celona SIM, activate the SIM in the Celona Orchestrator, and assign it to the Celona Edge instance that corresponds to your Celona RAN topology. We assume a start with the Apple iPhone 12 Pro phone with Wi-Fi disabled, no SIM installed, and having just been through the Reset Network Settings process. Configuration steps Starting pretty simple: remove the SIM cover, install the Celona SIM, replace the SIM cover and power on the phone. 👍 If the cellular strength icon continues to show no signal, you will need to perform a toggle in the cellular settings to obtain signal. In order to do so, open Settings > Cellular > Cellular Data Options and toggle Voice & Data from LTE to 3G and back to LTE. Navigate back to the main screen. Within 10 seconds, verify the cellular information in the upper right shows signal strength and LTE. As one last step, you can verify healthy network connection with a performance tool of your choice. 🙌 To see the Celona solution in action, check out our getting started guide.
https://docs.celona.io/en/articles/5173719-apple-iphone-12-pro-on-celona
2022-01-17T00:44:09
CC-MAIN-2022-05
1642320300253.51
[array(['https://downloads.intercomcdn.com/i/o/328245033/b5b8d55788c55f6190a05fa0/image.png?expires=1619355600&signature=cc1086b9fb32f36952a959db4c1722b24f2d7deb850573a5e9c248accb4589f1', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/328240532/000e00c7fda44aee9673d8ca/image.png?expires=1619355600&signature=9b29cd749b9b9265cfdb5f2e24792844bc7c42d4c276837f593e7d461ee48fa8', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/328239727/d7828b535502de830fdba661/image.png?expires=1619355600&signature=dd283379aee2247f7c8fbae6179e65073a48531bf3b41d762863ad328b4f96cf', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/328239338/ba2803f40b8b9fe0a623b1de/image.png?expires=1619355600&signature=6d22ec410a7b2bced57ebbd8c89e9309e5f5fbcb556158cd6ec3c40f906c77d2', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/328237920/9293a330377075083bd80dd8/image.png?expires=1619355600&signature=589e61ba1e83602121bc6bbc70bda3544d7cc2ac6a9ba64a55c1bd6a252f9c8b', None], dtype=object) ]
docs.celona.io
Priority Queue<TElement,TPriority> Class Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Represents a collection of items that have a value and a priority. On dequeue, the item with the lowest priority value is removed. generic <typename TElement, typename TPriority> public ref class PriorityQueue public class PriorityQueue<TElement,TPriority> type PriorityQueue<'Element, 'Priority> = class Public Class PriorityQueue(Of TElement, TPriority) Type Parameters - TElement - Specifies the type of elements in the queue. - TPriority - Specifies the type of priority associated with enqueued elements. - Inheritance - Remarks Implements an array-backed, quaternary min-heap. Each element is enqueued with an associated priority that determines the dequeue order. Elements with the lowest priority are dequeued first. Note that the type does not guarantee first-in-first-out semantics for elements of equal priority.
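For readers more familiar with Python than .NET, the same min-heap, lowest-priority-value-first behavior can be sketched with the standard heapq module. This is an analogy to illustrate the semantics, not the .NET implementation.

import heapq
import itertools

heap, counter = [], itertools.count()

def enqueue(element, priority):
    # Store (priority, tie-breaker, element); the counter only prevents the
    # elements themselves from ever being compared.
    heapq.heappush(heap, (priority, next(counter), element))

def dequeue():
    priority, _, element = heapq.heappop(heap)
    return element, priority

enqueue("send-newsletter", 5)
enqueue("page-oncall", 1)
enqueue("rebuild-index", 3)
print(dequeue())  # ('page-oncall', 1): the lowest priority value dequeues first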
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.priorityqueue-2?view=net-6.0&viewFallbackFrom=net-5.0
2022-01-17T02:45:19
CC-MAIN-2022-05
1642320300253.51
[]
docs.microsoft.com
ServicePulse is unable to connect to ServiceControl - See the ServiceControl release notes troubleshooting section for guidance on detecting ServiceControl HTTP API accessibility. - Verify that ServicePulse is trying to access the correct ServiceControl URI (based on ServiceControl instance URI defined in ServicePulse installation settings). - Check that ServicePulse is not blocked from accessing the ServiceControl URI by firewall settings. ServicePulse reports that 0 endpoints are active, although endpoint plugins were deployed - Follow the guidance in How to configure endpoints for monitoring by ServicePulse. - Restart the endpoint after copying the endpoint plugin files into the endpoint's bindirectory. - Ensure auditing is enabled for the endpoint, and the audited messages are forwarded to the correct audit and error queues monitored by ServiceControl. - Ensure relevant ServiceControl assemblies are not in the list of assemblies to exclude from scanning. For more details refer to Assembly scanning. - Ensure the endpoint references NServiceBus version 4.0.0 or later. ServicePulse reports empty failed message groups RavenDB index could be disabled. This typically happens when disk space runs out. To fix this: - Put ServiceControl in maintenance mode. - Open the Raven Studio browser - Navigate to the Indexes tab - For each disabled index, set it's state to Normal. This assumes ServiceControl is using the default port and host name; adjust the url accordingly if this is not the case. Heartbeat failure in ASP.NET applications Scenario After a period of inactivity, a web application endpoint is failing with the message: Endpoint has failed to send expected heartbeat to ServiceControl. It is possible that the endpoint could be down or is unresponsive. If this condition persists restart the endpoint. When accessed, the web application is operating as expected. However shortly after accessing the web application, the heartbeat message is restored and indicates the endpoint status as active. Causes and solutions The issue is due to the way IIS handles application pools. By default after a certain period of inactivity, the application pool is stopped or, under certain configurable conditions, the application pool is recycled. In both cases the ServicePulse heartbeat is not sent anymore until a new web request comes in waking up the web application. There are two ways to avoid the issue: - Configure IIS to avoid recycling - Use a periodic warm-up HTTP GET to make sure the website is not brought down due to inactivity (the frequency needs to be less than 20 minutes, which is the default IIS recycle-on-idle time) Starting from IIS 7.5, the above steps can be combined into one by following these steps: - Enable AlwaysRunning mode for the application pool of the site. Go to the application pool management section, open the Advanced Settings, and in the General settings switch Start Modeto AlwaysRunning. - Enabled Preload for the site itself. Right click on the site, then Manage Site in Advanced Settings, and in the General settings switch Enable Preloadto true. - Install the Application Initialization Module. - Add the following to the web.config in the system.webServer node. <applicationInitialization doAppInitAfterRestart="true" > <add initializationPage="/" /> </applicationInitialization> In some cases, configuring IIS to avoid recycling is not possible. Here, the recommended approach is the second one. It also has the benefit of avoiding the "first user after idle time" wake-up response-time hit. 
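A minimal sketch of the "periodic warm-up HTTP GET" approach, written in Python purely as an illustration. The URL and interval are placeholders; any scheduler or monitoring tool that issues the request more often than the IIS idle timeout works just as well.

# Hypothetical warm-up pinger: request the site often enough that IIS never
# considers the application pool idle (the interval must stay below the idle
# timeout, 20 minutes by default).
import time
import urllib.request

SITE_URL = "http://localhost/my-endpoint/"   # placeholder
INTERVAL_SECONDS = 10 * 60                   # well under the 20-minute default

while True:
    try:
        with urllib.request.urlopen(SITE_URL, timeout=30) as resp:
            print("warm-up ping:", resp.status)
    except Exception as exc:  # keep pinging even if one request fails
        print("warm-up ping failed:", exc)
    time.sleep(INTERVAL_SECONDS)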
Duplicate endpoints appear in ServicePulse after re-deployment This may occur when an endpoint is re-deployed or updated to a different installation path (a common procedure by deployment managers like Octopus). The installation path of an endpoint is used by ServiceControl and ServicePulse as the default mechanism for generating the unique Id of an endpoint. Changing the installation path of the endpoint affects the generated Id, and causes the system to identify the endpoint as a new and different endpoint. To address this issue, see Override host identifier. After enabling heartbeat plugins for NServiceBus version 3 endpoints, ServicePulse reports that endpoints are inactive Messages that were forwarded to the audit queue by NServiceBus version 3.x endpoints did not have the HostId header available which uniquely identifies the endpoint. Adding the heartbeat plugin for these endpoints automatically enriches the headers with this HostId information using a message mutator. Since the original message that was processed from the audit/error queue did not have this identifier, it is hard to correlate the messages received via the heartbeat that these belong to the same endpoint. Therefore there appears to be a discrepancy in the endpoints indicator. To address this issue: - Add the heartbeat plugin to all NServiceBus version 3 endpoints, which will add the required header with the host information. - Restart ServiceControl to clear the endpoint counter.
https://docs.particular.net/servicepulse/troubleshooting
2022-01-17T02:16:18
CC-MAIN-2022-05
1642320300253.51
[]
docs.particular.net
Getting Started Contents Getting Started¶ All questions are welcome in the Slack channel. Fugue is an abstraction framework that lets users write code in native Python or Pandas, and then port it over to Spark and Dask. This section will cover the motivation of Fugue, the benefits of using an abstraction layer, and how to get started. This section is not a complete reference, but will be sufficient enough to get started with writing full workflows in Fugue. Introduction¶ We’ll get started by introducing Fugue and the simplest way to use it with the transform() function. The transform() function can take in a Python or pandas function and scale it out in Spark or Dask without having to modify the function. This provides a very simple interface to parallelize Python and pandas code on distributed compute engines. Type Flexbility¶ After seeing an example of the transform() function, we look into the further flexibility Fugue provides by accepting functions with different input and output types. This allows users to define their logic in whatever makes the most sense, and bring native Python functions to Spark or Dask. Partitioning¶ Now that we have seen how functions can be written for Fugue to bring them to Spark or Dask, we look at how the transform() function can be applied with partitioning. In pandas semantics, this would be the equivalent of a groupby-apply(). The difference is partitioning is a core concept in distrubted compute as it controls both logical and physical grouping of data. Decoupling Logic and Execution¶ After seeing how the transform function enables the use of Python and pandas code on Spark, we’ll see how we can apply this same principle to entire compute workflows using FugueWorkflow. We’ll show how Fugue allows users to decouple logic from execution, and introduce some of the benefits this provides. We’ll go one step further in showing how we use native Python to make our code truly independent of any framework. Fugue Interface¶ In this section we’ll start covering some concepts like the Directed Acyclic Graph (DAG) and the need for explicit schema in a distributed compute environment. We’ll show how to pass parameters to transformers, as well as load and save data. With these, users will be able to start some basic work on data through Fugue. Joining Data¶ Here we’ll show the different ways to join DataFrames in Fugue along with union, intersect, and except. SQL and Pandas also have some inconsistencies users have to be aware of when joining. Fugue maintains consistency with SQL (and Spark). Extensions¶ We already covered the transformer, the most commonly used Fugue extension. Extensions are Fugue operations on DataFrames that are used inside the DAG. Here we will cover the creator, processor, cotransformer and outputter. Distributed Compute¶ The heart of Fugue is distributed compute. In this section we’ll show the keywords and concepts that allow Fugue to fully utilize the power of distributed compute. This includes partitions, persisting, and broadcasting. Fugue-SQL¶ We’ll show a bit on Fugue-Sql, the SQL interface for using Fugue. This is targeted for heavy SQL users and SQL-lovers who want to use SQL on top of Spark and Dask, or even Pandas. This is SQL that is used on DataFrames in memory as opposed to data in databases. With that, you should be ready to implement data workflows using Fugue. Ibis Integration (Experimental)¶ As a last note, we’ll show a nice addition to Fugue. 
The Ibis project provides a very nice Pythonic way to express SQL logic, and it is also an abstraction layer that can run on different backends. We can make these two abstractions work together seamlessly so users can take advantage of both. For full end-to-end examples, check out the Stock Sentiment and COVID-19 examples. For any questions, feel free to join the Slack channel.
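To make the transform() idea from the sections above concrete, here is a minimal sketch. The column names and the optional Spark engine string are illustrative; see the tutorial sections themselves for the authoritative API.

import pandas as pd
from fugue import transform

# A plain pandas function, written with no knowledge of Spark or Dask.
def add_one(df: pd.DataFrame) -> pd.DataFrame:
    df["b"] = df["a"] + 1
    return df

df = pd.DataFrame({"a": [1, 2, 3]})

# Runs locally on pandas by default; the output schema is stated explicitly
# because distributed engines cannot infer it cheaply.
result = transform(df, add_one, schema="*, b:int")
print(result)

# The same call can target a distributed engine, for example:
# result = transform(df, add_one, schema="*, b:int", engine="spark")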
https://fugue-tutorials.readthedocs.io/tutorials/beginner/index.html
2022-01-17T01:45:58
CC-MAIN-2022-05
1642320300253.51
[]
fugue-tutorials.readthedocs.io
You can now see all the logs from one workspace on the dedicated logs page, located near the executions page in analyze section of the navigational menu: While the most important logs are seen on the executions page, we decided no to spam you with everything else, and created a special page for all the logs. The new page lists all the logs for all the flows in the current workspace. Here you can filter the list by flow name, time, log level or use in-built search to find the logs you were looking for. For more details check our logs page. As a back-end to the logs page we are introducing a new API endpoint (still experimental) to request the same logs in the workspace. More about that in the Retrieve logs API endpoint section. As a back-end to the logs page we have a new, still experimental API endpoint v2/logs. You can use this endpoint to request the same logs you see in the new logs page. Here is the list of filters you can use with v2/logs endpoint: workspace_id(required) - The Workspace identifier flow_ids[]- Flow identifier from- Start Date of the period in ISO 8601format ( 2020-01-12T14:50:42.215Z) to- End Date of the period in ISO 8601format ( 2020-01-14T15:00:45.000Z) search- String to search in logs (searching string is wrapped by tag) offset- Number of items to skip from the beginning (defaults to 0) limit- Number of items to return (defaults is 100) levels[]- The logs level (1 - None, 10 - trace, 20 - debug, 30 - info, 40 - warn, 50 - error, 60 - fatal) For more information about the endpoint visit our API reference documentation. You can now switch between Integrator & Developer modes in the new mapper without losing any data that you have entered. However, if the root of the JSONata expression is a function, switching to Integrator mode is not possible since the Integrator mode has no functions support. For components, which use sailor version 2.6.0 and higher, you can now disable passthrough during the flow creation/editing via the UI: . Now you can use OAuth2 authentication method with the REST API component. Select OAuth2 as credential type. There are four mandatory fields ( client_Id, client_secret, auth_uri, token_uri) and two optional ( scopes and additional parameters). component.json) had an empty triggers object it would have appeared in the triggers selection during the flow design but not usable at all. Now we will not show that empty trigger of yours, you are welcome. Get New and Updated S3 Objects(aka Files). Polling By Timestamp. Get filenamesaction now returns any number of files. It was limited to 1,000 before. Get filenamesand Writeactions OAuth2authentication strategy {'result': %message%}. Introducing OpenAPI component for the platform. Make request OAuth2authentication strategy Lookup Objectsand Lookup Object (at most 1)actions Queryaction where the first response object was being sent multiple times instead of sending actual response objects A major update for the component: Queryand Lookup Objects Lookup Objectsand Upsertactions to support binary attachments dockerbuild type We upgraded 9 components to the latest Sailor version, to take advantage of the new logger, as well as to the new Docker build type:
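As a hedged illustration of how the experimental v2/logs endpoint described above might be called from Python: the base URL, the credential style, and the exact parameter encoding should be checked against the API reference, and all values below are placeholders.

import requests

resp = requests.get(
    "https://api.example-elastic-host.io/v2/logs",  # placeholder host
    auth=("user@example.com", "API_KEY"),           # placeholder credentials
    params={
        "workspace_id": "WORKSPACE_ID",
        "flow_ids[]": ["FLOW_ID"],
        "from": "2020-01-12T14:50:42.215Z",
        "to": "2020-01-14T15:00:45.000Z",
        "levels[]": [40, 50],  # warn and error
        "limit": 50,
    },
)
resp.raise_for_status()
print(resp.json())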
https://docs.elastic.io/releases/20.07.html
2022-01-17T01:50:07
CC-MAIN-2022-05
1642320300253.51
[]
docs.elastic.io
You can now remove versions of components directly from the Build history UI of the repository in the developer teams in addition to the API endpoint. During the flow design process some components must connect with the third party services which might have slower response times or take longer to process the request. In that cases you can now define how long system must wait for the sample data retreival process for each component. DEBUG_TASK_TIMEOUTin seconds to control the time. FORCE_DESTROY_DEBUG_TASK_TIMEOUT_SECvariable. DEBUG_TASK_TIMEOUTvariables are set for the flow components (i.e. for mapper and component after), the system will take bigger value of them. You can now set your own component version to show in the platform UI when you use the version parameter without letter v in your component.json configuration. { "title": "Petstore API (Node.js)", "description": "Component for the Petstore API", "buildType":"docker", "version":"1.0.1", "etc_fields":"to show" } This would show as v1.0.1 when you select the component version while designing the flow, as well as in the list of deployed component versions in the repository of your developer team. After you use the component version the following rules would apply: versionparameter and enforce the semantic versioning rules. News in this section are for our customers who use OEM version of the elastic.io platform. With this release we simplified the registration process to work with email and password. We removed the required last name and company name parameters. The registration process can get the necessary details from OIDC parameters if allowed or construct from the entered email address following the logic: [email protected] [email protected]. abc, xyz.com. Please Note User can change their contract name as well as enter complete first and last names if desired. As areminder: You can enable the custom registration page in the tenant record following the instructions here. With this release we enforce asset invalidation of browser assets after every platform deployment. The following assets would now contain short git revision IDs: /webpack/filter-tree-worker..js /webpack/single-spa.config..js /frontend..min.js |), to the list of separators. Maester. v2.6.24.
https://docs.elastic.io/releases/21.19.html
2022-01-17T00:21:05
CC-MAIN-2022-05
1642320300253.51
[]
docs.elastic.io
HTTP Proxy Support HTTP proxy support feature is only available in HYPR Workforce Access client 2.8 Proxy Support for Windows Admins can configure proxy settings so that the HYPR Workforce Access client can communicate to the HYPR server. With the Workforce 2.8 release, the following will be supported: Configured proxy in Workforce Access Configured system wide WinHttp proxy Support for Proxy Auto Config (PAC) URL The above list also defines the order of precedence (i.e., the proxy is honored in the order described above). Configured Proxy in Workforce Access The HTTP proxy can be configured solely for Workforce Access via the following registry settings. Configured Systemwide WinHttp proxy A proxy can also be specified systemwide using the netsh winhttp set proxy command. For example, a proxy server of “proxy.nyoffice.hypr.com:808” along with a proxy bypass list of “.hypr.com,10.20.” can be set with the following command. netsh winhttp set proxy proxy.hypr.com:808 "*.hypr.com,10.20.*" The proxy can also be reset back to its default using the netsh winhttp reset proxy command. This command will restore the proxy settings back to direct access. netsh winhttp reset proxy Support for Proxy Auto Config (PAC) URL A proxy can also be configured to reference a PAC URL (i.e., a JavaScript file that is utilized to control proxy settings). In its simplest form, an example JavaScript file might look like the following. function FindProxyForURL(url, host) { return "PROXY proxy.nyoffice.hypr.com:808"; } Two means of configuring a PAC URL are implemented: 1) the PAC URL can be picked up from DNS or DHCP or 2) the PAC URL can be manually configured in Workforce Access with the following registry setting. The HTTP proxy can be configured solely for Workforce Access via the following registry settings. Updated over 1 year ago
https://docs.hypr.com/installinghypr/docs/http-proxy-support-in-windows
2022-01-17T01:27:37
CC-MAIN-2022-05
1642320300253.51
[]
docs.hypr.com
DBConcurrencyException.Row Property

Definition

Important: Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Gets or sets the value of the DataRow that generated the DBConcurrencyException.

public: property System::Data::DataRow ^ Row { System::Data::DataRow ^ get(); void set(System::Data::DataRow ^ value); };
public System.Data.DataRow? Row { get; set; }
public System.Data.DataRow Row { get; set; }
member this.Row : System.Data.DataRow with get, set
Public Property Row As DataRow

Property Value

The value of the DataRow.

Remarks

Use Row to retrieve the value of the DataRow row that generated the DBConcurrencyException. Setting the value of the DataRow has no effect.

When performing batch updates with the ContinueUpdateOnError property of the DataAdapter set to true, this exception is thrown if all row updates fail. In this case, this DBConcurrencyException contains DataRow objects for all rows whose update failed, rather than just the one DataRow object in Row, and the affected DataRow objects can be retrieved by calling CopyToRows.

Serialization support does not exist for DataRow objects. Therefore, if you serialize a DBConcurrencyException object, the value of the Row property in the serialized version of the object is set to a null value.
https://docs.microsoft.com/en-us/dotnet/api/system.data.dbconcurrencyexception.row?view=netframework-4.8
2022-01-17T00:48:10
CC-MAIN-2022-05
1642320300253.51
[]
docs.microsoft.com
Connecting to a JDBC Database

Define a connection using the Data Sources page. You can go to this page through Management Console or through the Metadata Insights module.

- In the Type field, select the type of database you want to connect to. The Spectrum™ Technology Platform Data Integration Module includes JDBC drivers for SQL Server, Oracle, and PostgreSQL databases. If you want to connect to a different database type, you must add the JDBC driver before defining a connection.
- In the URL field, enter the JDBC connection URL. Your database administrator can provide this URL. For example, to connect to a MySQL database named "SampleDatabase" hosted on a server named "MyServer" you would enter: jdbc:mysql://MyServer/SampleDatabase
- There may be additional fields you need to fill in depending on the JDBC driver. The fields represent properties in the connection string for the JDBC driver you selected in the Type field. See the JDBC driver provider's documentation or your database administrator for information about the specific connection properties and values required by the connection type.
- Click Save.
- Test the connection by checking the box next to the new connection and clicking the Test button.
https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/AdministrationGuide-WebUI/ClientTools/ConfiguringExternalResources/adding_connection.html
2022-01-17T00:22:03
CC-MAIN-2022-05
1642320300253.51
[]
docs.precisely.com
TwirPHP: PHP port of Twirp, Twitch’s RPC framework

Twirp is “a simple RPC framework built on protobuf.” Unfortunately (or not?) it only supports Go and Python officially. While in the modern world that may be enough for most projects, there is still a considerable amount of PHP-based software out there. TwirPHP is a PHP port of Twirp supporting both the server and client side. It generates code the same way as Twirp does and follows the same conventions. Because of that, this documentation only contains minimal information about how Twirp works internally. To learn more about it, you should check the following resources published by the Twirp developers themselves:
https://twirphp.readthedocs.io/en/latest/index.html
2022-01-17T02:20:18
CC-MAIN-2022-05
1642320300253.51
[]
twirphp.readthedocs.io
Transfer CFT 3.6 Users Guide

Implement SSL with Sentinel

This topic describes how to enable SSL connections between a Transfer CFT client and a Sentinel server when Central Governance is not implemented. To do this, you insert in the internal PKI base the CA certificate that is used to authenticate the Sentinel server.

Prerequisites

You require the root certificate of the Sentinel server. See the Axway Sentinel Installation Guide for details. To manage the SSL session, ensure that the values in the following Transfer CFT UCONF parameters match those defined for the Sentinel server or Event Router:

- ssl.ciphersuites
- ssl.version_min
- ssl.extension.enable_sni

Procedure

Set the UCONF values as follows:

Enable the SSL connection.

CFTUTIL uconfset id=sentinel.xfb.use_ssl, value=Yes

Insert the Sentinel server's root certificate in the local PKI database in the appropriate format, for example DER.

PKIUTIL pkicer id=SENTINEL_ROOT, iname=<sentinel_root_certificate>, iform=DER

Enter the root certificate value for the Sentinel server.

CFTUTIL uconfset id=sentinel.xfb.ca_cert_id, value=SENTINEL_ROOT
https://docs.axway.com/bundle/TransferCFT_36_UsersGuide_allOS_en_HTML5/page/Content/interop/sentinel_ssl.htm
2022-01-17T01:06:35
CC-MAIN-2022-05
1642320300253.51
[]
docs.axway.com
Splunk Analytics for Hadoop provides a preprocessor framework, including a data preprocessor called HiveSplitGenerator that lets Splunk Analytics for Hadoop access and process data stored in Hive tables.

Splunk Analytics for Hadoop currently supports the following versions of Hive:

- 0.10
- 0.11
- 0.12
- 0.13
- 0.14
- 1.2

To connect to a Hive metastore, set the vix.hive.metastore.uris property in your provider stanza. Splunk Analytics for Hadoop uses the information in the provided metastore server to read the table information, including column names, types, data location and format, thus allowing it to process the search request.

Here's an example of a configured provider stanza that properly enables Hive connectivity. Note that a table contains one or more files, and that each virtual index could have multiple input paths, one for each table.

[provider:BigBox]
...
vix.splunk.search.splitter = HiveSplitGenerator
vix.hive.metastore.uris = thrift://metastore.example.com:9083

[orders]
vix.provider = BigBox
vix.input.1.path = /user/hive/warehouse/user-orders/...
vix.input.1.accept = \.txt$
vix.input.1.splitter.hive.dbname = default
vix.input.1.splitter.hive.tablename = UserOrders
vix.input.2.path = /user/hive/warehouse/reseller-orders/...
vix.input.2.accept = .*
vix.input.2.splitter.hive.dbname = default
vix.input.2.splitter.hive.tablename = ResellerOrders

In the rare case that the split logic of the Hadoop InputFormat implementation of your table is different from that of Hadoop's FileInputFormat, the HiveSplitGenerator split logic does not work. Instead, you must implement a custom SplitGenerator and use it to replace the default SplitGenerator. See "Configure Splunk Analytics for Hadoop to use a custom file format" for more information.

Configure Splunk Analytics for Hadoop to use a custom file format

To use a custom file format, you edit your provider stanza to add a .jar file that contains your custom classes via the vix.splunk.jars property. Note that if you don't specify an InputFormat class, files are treated as text files and broken into records by the newline character.

Configure Splunk Analytics for Hadoop to read your Hive tables without connecting to a metastore:

[your-provider]
vix.splunk.search.splitter = HiveSplitGenerator

[your-vix]
vix.provider = your-provider
vix.input.1.path = /user/hive/warehouse/employees/...
vix.input.1.splitter.hive.columnnames = name,salary,subordinates,deductions,address
vix.input.1.splitter.hive.columntypes = string:float:array<string>:map<string,float>:struct<street:string,city:string,state:string,zip:int>
vix.input.1.splitter.hive.fileformat = sequencefile
vix.input.2.path = /user/hive/warehouse/employees_rc/...

Partitioning table data

When using the Hive metastore, Splunk Analytics for Hadoop learns about the partitions using key values as part of the file path. For example, the following configuration

vix.input.1.path = /apps/hive/warehouse/sdc_orc2/${server}/${date_date}/...

would extract and recognize "server" and "date_date" partitions in the following path

/apps/hive/warehouse/sdc_orc2/idxr01/20120101/000859_0

Here is an example of a partitioned path where Splunk Analytics for Hadoop will automatically recognize the same partitions without any extra configuration

/apps/hive/warehouse/sdc_orc2/server=idxr01/date_date=20120101/000859_0
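To build intuition for how a ${key} path template like the one shown above maps a concrete file path to partition values, here is a rough Python sketch. It is not Splunk's implementation; the template handling shown is an assumption for illustration only.

import re

def template_to_regex(template: str) -> re.Pattern:
    # Split the template on ${key} placeholders, escape the literal pieces,
    # and turn each placeholder into a named capture group.
    parts = re.split(r"\$\{(\w+)\}", template)
    out = []
    for i, piece in enumerate(parts):
        if i % 2 == 0:
            out.append(re.escape(piece))       # literal path segment
        else:
            out.append(f"(?P<{piece}>[^/]+)")  # partition key
    return re.compile("".join(out))

# Template from the example above (without the trailing "...")
tmpl = "/apps/hive/warehouse/sdc_orc2/${server}/${date_date}/"
rx = template_to_regex(tmpl)
m = rx.match("/apps/hive/warehouse/sdc_orc2/idxr01/20120101/000859_0")
print(m.groupdict())  # {'server': 'idxr01', 'date_date': '20120101'}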
https://docs.splunk.com/Documentation/Splunk/7.0.7/HadoopAnalytics/ConfigureHivepreprocessor
2022-01-17T02:09:14
CC-MAIN-2022-05
1642320300253.51
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
We are maintaining an extensive list of devices as highlighted below, summarized from FCC's public database. Note that Citizens Broadband Radio Service (CBRS) is also known as LTE band 48 or the spectrum between 3.55-3.7GHz in the US. Test results, configuration steps and troubleshooting options for devices that have so far been successfully tested on Celona's CBRS LTE private mobile network are recorded as a separate collection of documents. Tablets and laptops Smartphones and handsets Wi-Fi Hotspots Indoor CPE Outdoor CPE Gateways and adapters Routers and edge appliances Chipset and module manufacturers Important to note, below are the chipset manufacturers who have been investing in the CBRS ecosystem and the details on different platforms they have started to offer to their customers, aka. the device makers. Qualcomm X16/20/24/55/60, 765, 835, 845, 855, 865, 690 & 8cx platforms Quectel 4G LTE Advanced and 5G IoT modules Sequans CB410L & CB610KL modules Cassiopeia LTE platform Sierra Wireless EM7411/75xx/7690/91xx & MC7411 modules, with Qualcomm inside Telit LM960 series PCI Express mini cards, with Qualcomm inside In the next article in this series, we review the wireless coverage guidelines for Celona's access points for CBRS LTE wireless.
https://docs.celona.io/en/articles/3484781-private-lte-5g-capable-devices-in-the-market
2022-01-17T00:49:30
CC-MAIN-2022-05
1642320300253.51
[]
docs.celona.io
Searching data / Building a query / Operations reference / Conversion group / Regular expression, regexp (re) Regular expression, regexp (re) Description Builds a regular expression from the given string. These regular expressions are based on a specific language that establishes patterns to help you match, locate and manage text. They can be used for several purposes, such as dividing a string into capturing groups. You can use the regular expressions generated using this operation in the Substitute (subs) and Substitute all (subsall) operations. The syntax used by Devo for this operation is Java syntax. Check the following link to know more about Java language and syntax to construct your own regular expressions. If you want a broader overview of the concept and uses of regular expressions, you can click the following link. Take care when using strings containing the \ escape character. For every \ in the string you must add \\\\ (4), resulting in a total of \\\\\ (5). This is because the Java compiler needs \\ and the regex engine also needs \\. Given messages like these already ingested in Devo: {\"request\":{\"Id\":23456,\"Email\":\"[email protected]\",\"Company\":\"Devo\",\"Team\":\"Marketing\"}} {\"request\":{\"Id\":34567,\"Email\":\"[email protected]\",\"Company\":\"Devo\",\"Team\":\"Sales\"}} {\"request\":{\"Id\":12345,\"Email\":\"[email protected]\",\"Company\":\"Devo\",\"Team\":\"Customer Support\"}} To retrieve the email address value, you can use this code: select peek(message, re("\\\\\"Email\\\\\":\\\\\"(.*?)\\\\\""),1) as email How does it work in the search window? Select Create column in the search window toolbar, then select the Regular expression, regexp operation. You need to specify one argument: The data type of the values in the new column is regexp. Example The first example, ";", 9) as regexStr In the upload.sample.data table, we want to transform the strings representing regular expressions in the regexStr column into regexp data type so we can use it later as the argument of another operation. To do it, we will create a new column using the Regular expression, regexp operation. The arguments needed to create the new column are: - Pattern - regexStr column Click Create column and you will see the following result: - A column in regexp data type that contains a pattern typically used to isolate the different parts of an email address. We can also create a column in the demo.ecommerce.data table that shows the regular expression ([0-9]+)\.* in regexp data type so we can use it later as an argument of another operation. To do it, we will create a new column using the Regular expression operation. Let's call it regex. The arguments needed to create the new column are: - Pattern - Click the pencil icon and enter ([0-9]+)\.* Click Create column and you will see the following result: - A column in regexp data type that contains a pattern typically used to isolate the different parts of an IP address. How does it work in LINQ? Use the operator as... and add the operation syntax to create the new column. This is the syntax for the Regular expression, regexp operation: re(string) Example You can copy the following LINQ script and try the previous examples on the my.upload.sample.data and demo.ecommerce.data tables from my.upload.sample.data select split(message, ";", 9) as regexStr, re(regexStr) as regex from demo.ecommerce.data select re("([0-9]+)\\.*") as regex
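If the Devo-specific escaping above feels opaque, it can help to check the underlying pattern outside Devo first. The following is a plain Python sketch (not Devo LINQ): the stored messages contain backslash-escaped quotes, and the pattern \"Email\":\"(.*?)\" captures the email value. The escaping shown below is Python's, not Devo's.

import re

# One of the sample messages; as stored, it contains literal backslash-escaped quotes.
message = r'{\"request\":{\"Id\":23456,\"Email\":\"[email protected]\",\"Company\":\"Devo\",\"Team\":\"Marketing\"}}'

# Capture whatever sits between \"Email\":\" and the next \" (non-greedy group).
pattern = re.compile(r'\\"Email\\":\\"(.*?)\\"')
match = pattern.search(message)
print(match.group(1))  # [email protected]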
https://docs.devo.com/confluence/ndt/v7.5.0/searching-data/building-a-query/operations-reference/conversion-group/regular-expression-regexp-re
2022-01-17T01:36:57
CC-MAIN-2022-05
1642320300253.51
[]
docs.devo.com
Creating an M3UA Network

Applies to version(s): v2.5, v2.6, v2.7

Now that you have created M3UA SAPs, you must create a new M3UA network to be used with them.

To create an M3UA network:

1- Click Create New M3ua Network in the M3UA configuration panel.

2- Configure the new M3UA network:
- Enter a name for the network
- Select a protocol type
- Select a sub-service field
- Select a DPC length
- Select an SLS range
- Click Create

3- Verify that the "M3ua Network was successfully created" message appears.
https://docs.telcobridges.com/mediawiki/index.php?title=Toolpack:Creating_an_M3UA_Network_A&redirect=no
2022-01-17T02:11:38
CC-MAIN-2022-05
1642320300253.51
[]
docs.telcobridges.com
Home > Contribute > Contribute to the core database How to contribute code to the core database Contribution checklist Checklist to review before starting your contribution. Build from source Building binaries from source. Configure a CLion project Configure a project in CLion, a C/C++ IDE. Run unit tests Run unit tests. Coding style YugabyteDB coding style guide, primarily for C/C++. Ask our community Slack Github Forum StackOverflow
https://docs.yugabyte.com/latest/contribute/core-database/
2022-01-17T01:53:48
CC-MAIN-2022-05
1642320300253.51
[]
docs.yugabyte.com
• The use of an Evaluation Matrix™ as a framework for decision criteria • The use of Language Ladders™ as a clear and concise method to define individual criteria for decision-making purposes • A unique algorithm for combining the Language Ladder™ rankings of experts or peer reviewers involved in evaluating projects or proposals against the decision criteria • The use of high-impact visualization to depict in graphical form how projects or proposals compare with each other based on the composite evaluations of experts or peer reviewers See sample documents following in Sections 6.4 and 6.5 For additional details about the ProGrid® methodology, please refer to: • Intangibles: Exploring the Full Depth of Issues, by C.W. Bowman, published in 2005.
http://docs.futureinnovate.net/?page_id=235
2022-01-17T01:03:20
CC-MAIN-2022-05
1642320300253.51
[]
docs.futureinnovate.net
Changes compared to 21.9.10 New Feature - Danish translation Enhancements - Try to access Amazon S3 China regions over IPv6 if possible - Improve performance of Reindex and Deep Verify operations - Improve performance of Office 365 SharePoint site / OneDrive backup Bug Fixes - Fix an issue with missing support for Amazon S3 me-south-1 and af-south-1 regions - Fix a cosmetic issue with not refreshing the branding logo image in the Comet Server web interface immediately after changing it - Fix a cosmetic issue with unclear error messages if Windows chkdsk has filled one of Comet's data file with all zeroes
https://docs.cometbackup.com/release-notes/21-9-11-voyager-released/
2022-01-17T01:17:17
CC-MAIN-2022-05
1642320300253.51
[]
docs.cometbackup.com
You have the option to add collaboration features such as chat and document storage for this tenant. Project Creators can then decide if they would like to use this in each individual project. If you have enabled collaboration using Microsoft Teams, when projects are created in Loome Assist a team that consists of your project members will be created in Microsoft Teams. Project users can then use the chat and document storage features of this team for that project. To use Microsoft Teams as your collaboration platform, select it here and you will then select an agent. The managed identity configured in your agent will allow it to interact with Microsoft Teams. Important: Additional agent configuration is required to enable Microsoft Teams. If you select this platform, the PowerShell commands you will need to run against Azure to allow each agent to create teams within Microsoft Teams will be displayed in the script field below your chosen agent. Once you have selected an agent, you will have to copy and run the script here against Azure to allow the agent selected above to create teams within Microsoft Teams. You will then be able to use the collaboration you have set up for your tenant when creating or editing a project. When creating or editing a project, you will be asked whether you would like to add collaboration tools and document libraries after you have selected an account. Select ‘Yes’ to enable collaboration. If an administrator has set the collaboration settings to ‘No Collaboration’ you will not have this option to add collaboration tools and document libraries. Please contact your administrator to discuss your settings. Once you have selected your account and enabled collaboration tools and document libraries, Loome will check that the account agent has been configured to manage Microsoft Teams. Once the project is created, a private team for this project will be created in Microsoft Teams. The name of the team created in Microsoft Teams will be called ‘Loome Assist - “Your Project Name“’. In Microsoft Teams, Project Owners will be added as Owners, and all other roles will be added as Members. Please note that if you turn off collaboration settings, the project will be disassociated from the team. If you later decide to re-enable collaboration for a project, the previously existing group will be used. It will also modify members of the team according to the project membership. If a project is deleted while collaboration is turned on, it will also disassociate from Microsoft Teams. If you click on Collaborate or Documents in the ‘My Actions’ slide-out in your project, it will direct you to the ‘Posts’ and ‘Files’ tabs respectively in Microsoft Teams. To collaborate and view posts in Teams from this project, click on Collaborate. It will direct you to ‘Posts’ in Microsoft Teams. To access the document library in Teams from this project, click on Document Library. It will direct you to ‘Files’ in Microsoft Teams.
https://docs.loomesoftware.com/assist/collaboration/
2022-01-17T01:05:39
CC-MAIN-2022-05
1642320300253.51
[]
docs.loomesoftware.com
Deleting a User This procedure describes how to permanently delete a Spectrum™ Technology Platform user account. Tip: User accounts can also be disabled, which prevents the account from being used to access the system without deleting the account. - Open the Management Console. - Go to . - Check the box next to the user you want to delete then click the Delete button .Note: The user account "admin" cannot be deleted.
https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/AdministrationGuide-WebUI/ClientTools/ManagingUserAccounts/user_deleting.html
2022-01-17T01:25:27
CC-MAIN-2022-05
1642320300253.51
[]
docs.precisely.com
Decision Insight 20210719 Save PDF Selected topic Selected topic and subtopics All content Export a pagelet In Axway Decision Insight (DI), you can export the text-based contents of a dashboard pagelet in its current state in either Excel or CSV format. Supported pagelets The pagelet export tool is supported for the following pagelet types only: Pagelet type Export file format Datagrid The data displayed follows the same layout as the pagelet.For example, if you're exporting the results of a payments search, and you've sorted the results by descending amount in your pagelet, then the export file will also display the results based on your sort setting. Activity The acknowledge mashlet Html UI allows to enter a formatted comment. This rich comment will be rendered in the activity pagelet Excel export as much as possible: Excel supports the character formats Bold, Underline and Italic. Line alignment (left, center, right, justified) is not rendered as excel only support one alignment for the whole cell. For comments containing an hyperlink, only the displayed text of the hyperlink is exported, not the URL. Bullet list and ordered list are not supported in excel cell, they are simulated with bullet '•' chars. Instance ImageMap The data is displayed on one line only. For more information about pagelet types, see Pagelet types. Supported mashlets Whatever the chosen export format, the pagelet export tool only exports text-based data. As a consequence, classifier values are exported as text. Chart mashlets (Multihistorical, Historical Baseline, Spark charts...) can be exported when they are presented in an Instance or Image Map pagelet. If the chosen export format is Excel, then the first worksheet of the resulting Excel workbook will present all the supported non chart mashlets. There is an additionnal worksheet for each chart mashlet. Each chart mashlet will be referenced in this first worksheet, with a direct link to the worksheet containing the chart data.If the chosen export format is CSV, then the export will result in a zip, with one CSV file for the global pagelet, with a reference to the file names of the exported charts, then one additional CSV file for each chart. Additional Excel worksheets or CSV files are named after the mashlet's "Header" label. Spark chart present in a Datagrid pagelet will not be exported. Action button and relation editors will not be rendered. For more information about mashlet types, see Mashlet types. Export a pagelet To export a pagelet, follow these steps. Step Action 1 On the main menu, click the All dashboards button. 2 Select a dashboard. 3 Once you're on the desired dashboard screen, click the Actions button on the top-right side of your screen. 4 Click Export.The Export pagelet pop-up screen is displayed. 5 In the Locale drop-down, select the relevant locale. This is important for CSV export as the CSV separator varies from one locale to the next. 6 In the Time zone drop-down, select your time zone. 7 In the Pagelet list, hover over the different pagelets. As you hover on a pagelet in the list, the corresponding pagelet is highlighted in the dashboard. This is particularly useful if the pagelets in your dashboard are untitled. Once you've found the pagelet to export, click Export to Excel or Export to CSV as needed. 8 When asked to save the file, click Yes. The pagelet export file is generated and you can choose where to save the file. 
Maximum number of rows in a datagrid pagelet export In a dashboard, the maximum number of rows displayed in a datagrid pagelet is limited based on a platform.properties setting. The default number is 500. However, when exporting a pagelet, the output file will contain all returned rows even if there are more than the default max results numbers in the list displayed on the user interface. To limit the number of rows in a pagelet export, restrict the row count of a pagelet. For example, if the row count of a pagelet is limited to 100, then the export file for this pagelet will also contain 100 rows only. To modify the row count of a pagelet, follow these steps. Step Action 1 Edit your pagelet. 2 On the left menu, click Format. 3 In the maximum number of rows displayed text box, enter the maximum number of rows allowed for this pagelet. 4 Click Done, then Save. Related Links
https://docs.axway.com/bundle/DecisionInsight_20210719_allOS_en_HTML5/page/export_a_pagelet.html
2022-01-17T00:51:46
CC-MAIN-2022-05
1642320300253.51
[]
docs.axway.com
This guide is designed to show the basic steps required and best practices to begin selling domain names. Here we describe installing a domain registrar module, configuring packages and package groups, and creating an order form. The first step toward selling domains is to configure a domain registrar module. You can do this by going to [Settings] > [Company] > [Modules] > [Available] and select the domain module of your choice. We're going to be using the LogicBoxes module for this tutorial. After installing the module you can proceed to add the module settings. Click on "Add Account" and you will be presented with the following: When you have entered the following you will be good to go: Registrar Name: This can be anything you want it to be as it's for internal use (If you have more than one account). Reseller ID: This is your logicboxes reseller ID (You can find this on the top right dropdown with the user icon under your email address). Key: This is the reseller API key provided by the registrar when you click "Get API key" you can find this under [Settings] > [API] > [View API Key]. Ensure you have entered your Blesta installation IP in the API section otherwise you won't be able to connect. The second step is to create a package group that will contain all of your domain registration packages, you can do this by going to [Packages] > [Groups] > [Add Group] The third step is to create a package that your customers will order with. You can do this two ways: You can create a package by going to [Packages] > [Browse Packages] > [Add Package] . The Package contains all the important information from the pricing and name to the welcome email the client receives after the order has been provisioned. Basic The Basic section consists of the following options: The Module Options section consists of the following options, which are specific to LogicBoxes: Click the "Add Additional Price" to add more pricing options. It's common for people to create 1 Month, 3 Month, 6 Month, and 1 Year price options. There are many possible combinations. The . When creating or editing a package that uses this module, the following tags will be available:. The final step is to create the order form where your customers can purchase your domain services. You can create an order form by going to [Packages] > [Order Forms] > [Add Order Form] . The Basic section consists of the following options: The Domain Package Group section consists of the following options: The Currencies and Gateways section consists of the following options: And there we have it, you have completed your first order form for selling domains with Blesta.
https://docs.blesta.com/plugins/viewsource/viewpagesrc.action?pageId=2621862
2022-01-17T01:48:00
CC-MAIN-2022-05
1642320300253.51
[]
docs.blesta.com
# Manage your organization integrations You can add the following Kubernetes clusters as an integration to deploy your nodes and networks to: - Amazon Elastic Kubernetes Service (EKS) (opens new window) - Azure Kubernetes Service (AKS) (opens new window) - Google Kubernetes Engine (GKE) (opens new window) - Self-managed Kubernetes cluster. While Chainstack supports Amazon EKS out of the box, to add other Kubernetes clusters, please contact us. The integrations feature is available on the Business and Enterprise subscription plans. For instructions on how to create an IAM user with programmatic access, node group size requirements, and how to set up your Amazon EKS cluster for Chainstack integration, see Setting up an Amazon EKS cluster to integrate with Chainstack. # Add Amazon EKS as an integration - On Chainstack, click Integrations. - Provide any name. - Under Type, select Amazon Elastic Kubernetes Service. - Select the region where you have a deployed Amazon EKS service. - Provide the access key and secret of the user you have specifically created for the Chainstack integration. - Provide the Kubernetes namespace you have specifically created for the Chainstack integration. - Provide any domain name for the deployment. This domain name will be a part of your node's endpoint URL. - Click Add connection. Once the integration is validated, you can use it to deploy your nodes and networks. # Delete an integration - On Chainstack, click Integrations. - Select an integration. - Click Edit > Delete. This will delete Chainstack-added Pods from the cluster. This will not delete the cert-manager.
https://docs.chainstack.com/platform/manage-your-organization-integrations
2022-01-17T01:49:06
CC-MAIN-2022-05
1642320300253.51
[]
docs.chainstack.com
An Act to renumber 97.24 (3); to renumber and amend 97.22 (8); to amend 97.20 (2) (e) 1., 97.22 (2) (a), 97.24 (2) (a) and 97.24 (2) (b); and to create 97.22 (2) (d), 97.22 (8) (bm), 97.24 (2m), 97.24 (3) (b) and 97.29 (1) (g) 1m. of the statutes; Relating to: the sale of unpasteurized milk and unpasteurized milk products and an exemption from requirements for certain dairy farms. (FE)
https://docs.legis.wisconsin.gov/2015/proposals/sb721
2022-01-17T01:01:50
CC-MAIN-2022-05
1642320300253.51
[]
docs.legis.wisconsin.gov
Use Azure AD groups to manage role assignments Azure Active Directory (Azure AD) lets you target Azure AD groups for role assignments. Assigning roles to groups can simplify the management of role assignments in Azure AD with minimal effort from your Global Administrators and Privileged Role Administrators. Why assign roles to groups? Consider the example where the Contoso company has hired people across geographies to manage and reset passwords for employees in its Azure AD organization. Instead of asking a Privileged Role Administrator or Global Administrator to assign the Helpdesk Administrator role to each person individually, they can create a Contoso_Helpdesk_Administrators group and assign the role to the group.. How role assignments to groups work To assign a role to a group, you must create a new security or Microsoft 365 group with the isAssignableToRole property set to true. In the Azure portal, you set the Azure AD roles can be assigned to the group option to Yes. Either way, you can then assign one or more Azure AD roles to the group in the same way as you assign roles to users. Restrictions for role-assignable groups Role-assignable groups have the following restrictions: - You can only set the isAssignableToRoleproperty or the Azure AD roles can be assigned to the group option for new groups. - The isAssignableToRoleproperty is immutable. Once a group is created with this property set, it can't be changed. - You can't make an existing group a role-assignable group. - A maximum of 400 role-assignable groups can be created in a single Azure AD organization (tenant). How are role-assignable groups protected? If a group is assigned a role, any IT administrator who can manage group membership could also indirectly manage the membership of that role. For example, assume that a group named Contoso_User_Administrators is assigned the User Administrator role. An Exchange administrator who can modify group membership could add themselves to the Contoso_User_Administrators group and in that way become a User Administrator. As you can see, an administrator could elevate their privilege in a way you did not intend. Only groups that have the isAssignableToRole property set to true at creation time can be assigned a role. This property is immutable. Once a group is created with this property set, it can't be changed. You can't set the property on an existing group. Role-assignable groups are designed to help prevent potential breaches by having the following restrictions: - Only Global Administrators and Privileged Role Administrators can create a role-assignable group. - The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role. - By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners. - RoleManagement.ReadWrite.Directory Microsoft Graph permission is required to be able to manage the membership of such groups; Group.ReadWrite.All won't work. - To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group. - Group nesting is not supported. A group can't be added as a member of a role-assignable group. 
Use PIM to make a group eligible for a role assignment If you do not want members of the group to have standing access to a role, you can use Azure AD Privileged Identity Management (PIM) to make a group eligible for a role assignment. Each member of the group is then eligible to activate the role assignment for a fixed time duration. Note For privileged access groups that are used to elevate into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval might create a security risk from administrators who have a lower level of permissions. For example, the Helpdesk Administrator has permissions to reset an eligible user's password. Scenarios not supported The following scenarios are not supported: - Assign Azure AD roles (built-in or custom) to on-premises groups. Known issues The following are known issues with role-assignable groups: - Azure AD P2 licensed customers only: Even after deleting the group, it is still shown an eligible member of the role in PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal. - Use the new Exchange admin center for role assignments via group membership. The old Exchange admin center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected. - Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can migrate to the unified sensitivity labeling platform and then use the Office 365 Security & Compliance center to use group assignments to manage roles. - Apps admin center doesn't support this feature yet. Assign users directly to Office Apps Administrator role. License requirements Using this feature requires an Azure AD Premium P1 license. To also use Privileged Identity Management for just-in-time role activation, requires an Azure AD Premium P2 license. To find the right license for your requirements, see Comparing generally available features of the Free and Premium editions.
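As a rough illustration of the creation step described in "How role assignments to groups work" above, the sketch below calls the Microsoft Graph POST /groups endpoint from Python with isAssignableToRole set to true. Treat it as a hedged sketch: it assumes you already hold a valid access token obtained by an account allowed to create role-assignable groups, and the names and values are placeholders.

import requests

# Hypothetical token; the caller must have the privileges required to create
# role-assignable groups (for example, a Privileged Role Administrator using an
# app with RoleManagement.ReadWrite.Directory and Group.ReadWrite.All).
ACCESS_TOKEN = "<access-token>"

payload = {
    "displayName": "Contoso_Helpdesk_Administrators",
    "mailEnabled": False,
    "mailNickname": "contosohelpdeskadmins",
    "securityEnabled": True,
    "isAssignableToRole": True,  # immutable after creation
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["id"])  # object ID of the new role-assignable group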
https://docs.microsoft.com/sr-Latn-RS/azure/active-directory/roles/groups-concept
2022-01-17T01:18:13
CC-MAIN-2022-05
1642320300253.51
[array(['media/groups-concept/role-assignable-group.png', 'Screenshot of the Roles and administrators page'], dtype=object)]
docs.microsoft.com
Spectrum Technology Platform Updates

Spectrum Technology Platform updates provide fixes and enhancements between major releases and service packs. We recommend that you install all updates that apply to the platform and all modules you have installed. You do not need to install updates that apply to modules you haven't installed. Always review the release notes before installing an update. The release notes contain information about the contents of the update as well as installation instructions.

Important: You must install updates in order. For example, you must install S01 before S02, and S02 before S03, and so on.

You have access only to the modules you have licensed. To evaluate any other modules, contact your Precisely account executive for a trial license key.
https://docs.precisely.com/docs/sftw/spectrum/ProductUpdateSummary/ProductUpdateSummary/source/Introduction.html
2022-01-17T00:26:16
CC-MAIN-2022-05
1642320300253.51
[]
docs.precisely.com
Quickly Add an Accelerometer with SPI Overview IOT (Internet of Things) solutions often use accelerometer sensors, such as an IMU (Inertial Measurement Unit). The EFR32BG22 Thunderboard (kit SLTB010A) has an on board 6-axis ICM-20648 sensor from vendor TDK InvenSense. This lab shows how to quickly and efficiently add the accelerometer sensor. Topic Covered - Accelerometer (Inertial Measurement Unit) sensor - Software component - EFR Connect app Getting Started Ensure that you have the correct hardware and software prepared to successfully complete the lab. Hardware Requirements - Silicon Labs EFR32BG22 Thunderboard kit: SLTB010A The kit includes the following: - EFR32BG22C224F512IM40 SoC - 6-axis IMU (Inertial Measurement Unit): TDK InvenSense ICM-20648 with SPI interface - A mobile device for installing EFR Connect Mobile App: Android or Apple iPhone/iPad Software Requirements - Simplicity Studio v5 - Gecko SDK Suite v3.1 (GSDK) or above with the Bluetooth Stack (v3.1.1) installed - EFR Connect Mobile App Install Tools Download and install Simplicity Studio v5 if not already installed. Ensure that you have GSDK 3.1.x and Bluetooth Stack installed. Connect your Hardware Attach the EFR32BG22 Thunderboard kit to the PC with Simplicity Studio installed by using a USB cable (not a charging cable). Connecting between the PC host USB port and the J-Link USB port (USB micro) on the kit. USB J-Link Hardware and Software Introduction Connection between IMU Sensor and EFR32BG22 See the EFR32BG22 Thunderboard schematic for detailed hardware information (schematic for EFR32BG22 Thunderboard in PDF format). Hardware Connection The following are the EFR32BG22 pins used to connect ICM-20648: - SPI interface: - SPI_MOSI (PC00) - SPI_MISO (PC01) - SPI_SCLK (PC02) - SPI_CS (PB02) - IMU_INT (PB03) - IMU_ENABLE (PB04) EFR32BG22 USART(SPI) Peripheral EFR32BG22 has 2 USART peripheral instances. The USART peripheral is a flexible I/O module, which supports the following modes: - Full duplex asynchronous UART communication with hardware flow control as well as RS-485 - SPI - Others See the EFR32xG22 Wireless Gecko Reference Manual Section 21 on how to use this peripheral. The ICM-20648 sensor on EFR32BG22 Thunderboard uses the SPI interface. The sensor also supports I2C Fast Mode. Accelerometer Sensor ICM-20648 This sensor provides the following: - Orientation, 3-axis gyroscope. - Acceleration, 3-axis accelerometer. Note: The sensor supports features such as calibration, auxiliary I2C interface, and others. In this lab, features like calibration were not included. Software Architecture Below is an overview of the software architecture: Software Architecture The figure shows the following: - Customer application code, which is custom developed by users according to their application requirements - Driver abstract, which is code developed by Silicon Labs. It contains routines provided by vendors that are “repacked” to simplify calling from the application. - Platform-independent drivers, which is code developed by both IMU vendors and Silicon Labs to connect the IMU device with multiple Silicon Labs devices. It also supports setup for registers specific to the IMU device. - Platform-specific drivers, which is code developed by Silicon Labs that executes drivers using appropriate low-level peripherals that may differ depending on a device/board. - Hardware, which are Silicon Labs peripherals on EFR32/EFM32 devices. 
Lab Creating the Project - If the EFR32BG22 Thunderboard isn't plugged into the PC using the USB cable, do so now. - In the Launcher->Debug Adapterswindow, click on the Thunderboard EFR32BG22 (ID:xxxxxxxxx). The kit(end with SLTB010A), board(end with BRDxxxxx Rev Axx) and device(EFR32BG22CxxxFxxxxxxx) debug information should be displayed in the Launcher->Debug Adapterswindow. Debug Adapters Window - Information about the target hardware and software will appear in Launcher->Overviewtab (together with the Adapter FWand Secure FWversion). If this does not appear, click on the Launcherbutton in the top right corner. Note: If the Secure FW shows as Unknown, click on Read FW Version on the right side of it to get the version. You may also upgrade the Adapter FW to the latest version. - Select the Preferred SDKto the latest version. For this lab, the latest version of Gecko SDK Suite v3.1.1is used. - Click on Create New Projectin the upper right-hand corner. A New Project Wizardwindow should appear. Note: If you already have projects in the workspace, you may not see the Create New Project button. You can try other options to create a new project, e.g., File->New->Silicon Labs New Project Wizard... or 'Examples Projects & Demos' tab. Launcher Overview - For this lab, the Bluetooth - SoC Emptyproject is used as the starter project. Scroll and select Bluetooth - SoC Empty. Note: To filter the projects, Select/Checked the Bluetooth for the Technology Type and input empty for Filter on keywords. Project Filter - Click Nextto move on. - Rename the project under Project name. For this lab, name the project soc_spi_acc. - Select (check) Copy contentsunder With project filesto copy the project files into your project. This makes version control easier to manage and future updates to the Simplicity Studio libraries will not impact the copied files in this project. - Check Use default location(workspace). - Click Finishto generate the project. Project Rename - The IDE perspectivelaunched automatically. - You can now see gatt_configuration.btconf, soc_spi_acc.slcpand readme. IDE Window Summary of Previous Steps Congratulations! The SoC Empty is successfully created. Additionally, the Bluetooth - SoC Empty project will pre-install some software components. To see what was installed, check the Installed Components under Software Components. Pre-Installed Components You will see that the following components were installed. - Advanced Configurators->Bluetooth GATT Configurator - Bluetooth->OTA->AppLoader - Platform->Services->Sleep Timer, and so on If you don't see these components, make sure that you followed the procedure described above. Installing the IMU Sensor Software Components - Select the Software Componentstab on the top. Scroll downto the different sections (such as Platform). Note all of the components that you can easily install for your application. - Install the following components using the Installbutton, as shown in the image. The process is repeated for all components that need to be added. Services->IO Stream->IO Stream: USART(dependency) Platform->Board Drivers->IMU - Inertial Measurement Unit Bluetooth->Sensor->Inertial Measurement Unit sensor Bluetooth->GATT->Inertial Measurement Unit GATT Service For example, take a look at the information shown on the right side of the view for the IMU - Inertial Measurement Unit component. Usually information such as Description, Quality level, and Dependencies of the component is provided. 
The view also provides a good overview of all functions that are defined/added for this component. If you click the View Dependencies button, you can see that one of dependency components is Platform->Board Drivers->ICM20648 - Motion Sensor. Installing the component IMU - Inertial Measurement Unit will also automatically install Platform->Board Drivers->ICM20648 - Motion Sensor. Dependency Component Note: You can input a keyword on the upper right side of Search keywords, component's name box to filter the component. For example, the keyword inertial will cause only the last three components above to be visible. I/O Stream IMU - Inertial Measurement Unit GATT Service Note: Although IMU - Inertial Measurement Unit is classified as a Board Driver, you can still install it even when using a custom board. However, you must configure the dependency component Platform->Board Drivers->ICM20648 - Motion Sensor to match pinouts of your custom board. Otherwise, the build will report errors, as will be shown in the code explanation section. Note: Installing the Inertial Measurement Unit sensor adds sl_sensor_imu.c/h files. These files are dependent on the Board Control. In the future, you will be able to add support for a custom board by adding a Board Control file to the same location where you located the option for the Thunderboard BG22. Board Control Note: Inertial Measurement Unit GATT Service configuration button should link to the btconf->gatt_service_imu.xml. In the future, you will be able to add support for a custom board. IMU GATT Service Configuration Summary of Previous Steps IMU - Inertial Measurement Unit Component After you add/install the IMU - Inertial Measurement Unit, you will see that files, such as sl_icm20648.c and sl_icm20648.h were added. Note: - These are driver files prepared by Silicon Labs for ICM-20648. The vendor of the sensor commonlyprovides this driver. Usually, modifications will need to be made. - For example, for ICM-20648, InvenSense provides the Software User Guide For ICM-20×48 eMD. - If you use sensors from another vendor, implement a similar driver yourself. This component also adds the driver files sl_imu_dcm/sl_imu_math.c/sl_imu_fuse.c/sl_imu.c, which are also prepared by Silicon Labs for ICM-20648. IMU Driver 1 IMU Driver 2 Inertial Measurement Unit sensor Component After you add/install the Inertial Measurement Unit sensor, you will see that additional files were added. IMU Driver 3 Inertial Measurement Unit GATT Service Component After you add/install the Inertial Measurement Unit GATT Service, you will see that additional files were added. GATT Service Note: This will add Acceleration and Orientation service. In the file sl_event_handler.c, you can see the API sl_gatt_service_imu_step was added into the routine sl_internal_app_process_action. This will transmit Acceleration and Orientation data to a client when the client subscribe the service. void sl_internal_app_process_action(void) { sl_gatt_service_imu_step(); } GATT Configuration - Go back to the gatt_configuration.btconffile (tab). If you have closed this tab, you can re-open it either via Project Exploreror Software Components->Advanced Configurators->Bluetooth GATT Configurator. GATT Configuration 1 GATT Configuration 2 - You can rename the device namevia Custom BLE GATT->Generic Access->Device Name->Value settings->Initial value. Here, it is renamed to spi_acc. Device Name - Later in the lab, EFR Connect mobile app will display the Accelerationand Orientationvalues under their UUIDs. 
You can write them down to verify which one you are viewing. Note: You can simplify by recording the first 5 unique characters of the UUID. Accel = c4c1f6; Orient = b7c4b6 GATT Acceleration Adding the Project Source Files - Copy the provided app.c file to the top level of the project. The source files and code details are found at the Code Explanationsection. app.cwill overwrite the existing file to add the new application. The source files can be dragged and dropped into Simplicity Studio or placed in this file path: C:\Users\user_account\SimplicityStudio\v5_workshop\soc_spi_acc Note: Where user_account is the default workspace and Simplicity Studio installation path. You can also edit the app.c file manually. Build and Flash the Project - Build the project by clicking on the hammericon in the top left corner of the Simplicity Studio IDE perspective. Build with the Hammer Icon - Right-click on the hexfile (under GNU ARM xxx - Debugor Binaries) and select Flash to Device...to make the Flash Programmerwindow appear. Flash to Device Note: hex image is recommended here rather than bin image. If you choose 'bin', you need explicitly give the base address in the Flash Programmer. Flash Programmer Note: If a Device Selection window appears, select the correct device. - Click Programto flash the device. Note: The EFR32BG. Then, select OK. Flash Programmer Usage Connecting with EFR Connect App - Open the mobile app EFR Connectand select the Develop->Browser. EFR Connect Develop - With the EFR Connect App, Connectto the device. EFR Connect Browser Note: If multiple Bluetooth devices are visible: - You may try to get the MACof the device via Simplicity Commander( Serial Number) first. - You may also use device name(step 18 above) to determine which device to connect to. - You may also filter the scanning via RSSIstrength. Simplicity Commander RSSI Strength Note: If the board is not found, press the reset button on the kit or click Stop Scanning then Start Scanning in the app. - Click the Service->Characteristic->Notifybutton. Here, the Service and Characteristic are shown as Unknownbecause they are not standard. Click the 'Notify' bell at the bottom of the Characteristic to let the app (as client) subscribethe service. Click Notify Note: You should already have the UUID in step 19. - You should see the sensor data get updated regularly. You can change the orientation of the Thunderboard or shake it (change acceleration) to see this change. IMU Data Update Note: observe “Average” of value in one position. Then, change orientation and you should notice a change in the average value. Code Explanation The following sections explain critical lines of code pertinent to this lab. The code can be found in different files (driver). Accelerometer (ICM-20648) Driver sl_icm20648_config.h This is a header file generated automatically by the Simplicity Studio Pintool/Software Component. You may need to change the pin map based on your hardware. Use the Software Components->Platform->Board drivers->ICM20648 - Motion Sensor->Configure to change it. Driver Pintool The pin map for ICM-20648 is here: Driver Pinmap Note: It matches the schematic mentioned earlier in the document, section Connection between IMU sensor and EFR32BG22. sl_icm20648.c This file is located in folder, as follows: C:\SiliconLabs\SimplicityStudio\v5\developer\sdks\gecko_sdk_suite\v3.1\hardware\driver\icm20648\src This is the driver file for ICM-20648 sensor prepared by Silicon Labs. 
Driver File Path 1 If you use a sensor from another vendor, you may need to consider implementing the similar driver for it. Consider contacting the vendor for help to implement the driver. sl_imu_dcm.c/sl_imu_math.c/sl_imu_fuse.c/sl_imu.c/sl_sensor_imu.c These files are re-packed of the API provided in the driver sl_icm20648.c. The high-level code ( app.c and other) calls APIs, such as sl_sensor_imu_init, sl_sensor_imu_get, and others provided in the file sl_sensor_imu.c/h to initialize, enable/disable the IMU sensor, and read sensor data. These files are in the following folders: C:\SiliconLabs\SimplicityStudio\v5\developer\sdks\gecko_sdk_suite\v3.1\hardware\driver\imu\src C:\SiliconLabs\SimplicityStudio\v5\developer\sdks\gecko_sdk_suite\v3.1\app\bluetooth\common\sensor_imu Driver File Path 2 Driver File Path 3 Note: sl_sensor_imu.c/hprovides APIs for the app, which are added after installing the component Inertial Measurement Unit sensor. sl_imu_dcm.c/sl_imu_math.c/sl_imu_fuse.c/sl_imu.cwere added after installing the component IMU - Inertial Measurement Unit. sl_icm20648.cwas added after installing the ICM20648 - Motion Sensor. This is a dependency component of the IMU - Inertial Measurement Unit. Application (app.c) The SoC Empty project generates a default app.c source file with a skeleton Bluetooth event handler. The app.c file provided for this lab adds code to handle the BLE connection and notifications. Connection Opened The IMU sensor is initialized and enabled when the event sl_bt_evt_connection_opened_id is received. The IMU sampling does not start until a connection has been made and the user has enabled GATT Acceleration Notification (or Orientation Notification) characteristics. /* place 1, code added for accelerometer workshop */ static void sensor_init(void) { sl_sensor_imu_init(); sl_sensor_imu_enable(true); } // ------------------------------- // This event indicates that a new connection was opened. case sl_bt_evt_connection_opened_id: /* place 4, code added for accelerometer workshop */ sensor_init(); break; Connection Closed When the connection is closed, the sl_bt_evt_connection_closed_id event is triggered. To save power when no devices are connected, the sensor was disabled via sensor_deinit function. /* place 2, code added for accelerometer workshop */ static void sensor_deinit(void) { sl_sensor_imu_deinit(); } // ------------------------------- // This event indicates that a connection was closed. case sl_bt_evt_connection_closed_id: // Restart advertising after client has disconnected. sc = sl_bt_advertiser_start( advertising_set_handle, advertiser_general_discoverable, advertiser_connectable_scannable); sl_app_assert(sc == SL_STATUS_OK, "[E: 0x%04x] Failed to start advertising\n", (int)sc); /* place 5, code added for accelerometer workshop */ sensor_deinit(); break; Sensor Data Read The code below placed in app.c will periodically read sensor data. /* place 3, code added for accelerometer workshop */ sl_status_t sl_gatt_service_imu_get(int16_t ovec[3], int16_t avec[3]) { return sl_sensor_imu_get(ovec, avec); } This was called by sl_internal_app_process_action->sl_gatt_service_imu_step. It overrides the weak routine sl_gatt_service_imu_get in the sl_gatt_service_imu.c. SL_WEAK sl_status_t sl_gatt_service_imu_get(int16_t ovec[3], int16_t avec[3]) { (void)ovec; (void)avec; return SL_STATUS_FAIL; } Pay special attention to the variable imu_state in the file sl_gatt_service_imu.c. 
event sl_bt_evt_connection_closed_id and sl_bt_evt_gatt_server_characteristic_status_id will change its value. App Code Call Hierarchy App Code Call Hierarchy BLE Notification After the user has enabled GATT notifications to the characteristic, the sl_bt_evt_gatt_server_characteristic_status_id event is triggered. In this event, the device will periodically update the characteristic value until the device disconnects. sl_event_handler.c This file was automatically generated and is in the following folder (workspace): C:\Users\delu\SimplicityStudio\v5_workshop\soc_spi_acc\autogen void sl_internal_app_process_action(void) { sl_gatt_service_imu_step(); } sl_gatt_service_imu.c This file is in the following folder: C:\SiliconLabs\SimplicityStudio\v5\developer\sdks\gecko_sdk_suite\v3.1\app\bluetooth\common\gatt_service_imu void sl_gatt_service_imu_step(void) { if (imu_state) { if (SL_STATUS_OK == sl_gatt_service_imu_get(imu_ovec, imu_avec)) { if (imu_acceleration_notification) { imu_acceleration_notify(); } if (imu_orientation_notification) { imu_orientation_notify(); } } } } This routine calls imu_orientation_notify and imu_acceleration_notify to send the sensor data to the client. BLE Notification Call Hierarchy BLE Notification Call Hierarchy Source app.c Porting Considerations Other Sensors EFR32BG22 Thunderboard also integrates other sensors: - Silicon Labs Relative humidity & temperature sensor: I2C Si7021 - Silicon Labs UV and ambient light sensor: I2C Si1133 - Silicon Labs Hall effect sensor: I2C Si7210 If your solution needs these sensors, you may use a similar procedure to add them. Bootloader The application generated via Bluetooth SoC Empty project doesn't include the bootloader. Program the bootloader to the device first. In some cases, the bootloader may be missing from the device if it has been completely erased. If that happens, do the followingS: - Open the Flash Programmerand program the bootloader found here: C:\SiliconLabs\SimplicityStudio_v5\developer\sdks\gecko_sdk_suite\v3.1\platform\bootloader\sample-apps\bootloader-storage-internal-single-512k\efr32mg22c224f512im40-brd4182a\bootloader-storage-internal-single-512k.s37 - Flash a demo example first (like Bluetooth SoC - Thunderboard EFR32BG22), then flash the application. IMU Sensor Data Interpretation The EFR Connect shows the sensor raw data. To interpret data, see the sensor and driver data sheets. To show a meaningful sensor data, look at the source code of the Thunderboard app. Reference Peripheral Examples Gecko Platform Documentation Simplicity Studio v5 User's Guide
https://docs.silabs.com/bluetooth/3.3/lab-manuals/quickly-add-an-accelerometer-with-spi
2022-01-17T00:57:37
CC-MAIN-2022-05
1642320300253.51
[]
docs.silabs.com
CSV Rendering Design Considerations
The Comma-Separated Value (CSV) rendering extension outputs reports as a flattened representation of the report data in a plain-text format. The rendering extension uses a string delimiter (a comma, by default) to separate fields and the environment newline character to separate records (rows). The field delimiter can be configured to be a character other than a comma. The exported report becomes a .csv file with a MIME type of text/plain. The resulting file can be opened in a spreadsheet program or used as an import format for other programs.

When rendered using the default settings, a CSV report has the following characteristics:
- The first record contains headers for all the columns in the report (the items' names, not values).
- All rows have the same number of columns.
- The default field delimiter string is a comma (,). You can change the field delimiter to any character that you want by changing the device information settings. For more information, see CSV Device Information Settings.
- The record delimiter string is taken from the environment. It is the newline character for the corresponding operating system (OS). For example, for Windows OS this is the carriage return and line feed '\r\n'. For Linux OS this is the '\n' character.
- The text qualifier string is a quotation mark (").
- If the text contains an embedded delimiter string or qualifier string, the text qualifier is placed around the text, and the embedded qualifier strings are doubled.
- Formatting and layout are ignored.

The following items are ignored during processing:
- Page Header Section
- Page Footer Section
- Shape
- Cross-section Item
- PictureBox
- Chart

The remaining report items appear as ordered in the parent's item collection. Each item is then rendered to a column. If the report has a nested report, the parent items are repeated in each record. The following table lists the considerations applied to items when rendering to CSV:

Flattening the Hierarchical and Grouped Data
Hierarchical and grouped data must be flattened in order to be represented in the CSV format. The rendering extension flattens the report into a tree structure that represents the nested groups within the data item. To flatten the report: A row hierarchy is flattened before a column hierarchy. Columns are ordered as follows: text boxes in body order left-to-right, top-to-bottom, followed by data items ordered left-to-right, top-to-bottom. In a data item, the columns are ordered as follows: corner members, row hierarchy members, column hierarchy members, and then cells. Peer data items are data items or dynamic groups that share a common data item or dynamic ancestor. Peer data is identified by branching of the flattened tree.

Interactivity
This rendering extension does not support any interactive features.
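As a concrete illustration of the delimiter, qualifier, and doubling rules above (the field value here is invented for the example), a text box whose value contains an embedded comma and quotation marks would be rendered like this:

Original value:      She said "fine", then left
Rendered CSV field:  "She said ""fine"", then left"

A value that contains neither the field delimiter nor the qualifier is written as-is, without surrounding quotation marks.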
https://docs.telerik.com/reporting/designing-reports-considerations-csv
2022-01-17T00:34:44
CC-MAIN-2022-05
1642320300253.51
[]
docs.telerik.com
The literal for an array of primitive values
This section states a sufficient subset of the rules that allow you to write a syntactically correct array literal that expresses any set of values, for arrays of any scalar data type, that you could want to create. The full set of rules allows more flexibility than do just those that are stated here. But because these are sufficient, the full, and rather complex, set is not documented here. The explanations in this section will certainly allow you to interpret the ::text typecast of any array value that you might see, for example in ysqlsh. Then Yugabyte's recommendation in this space is stated. And then the rules are illustrated with examples.

Statement of the rules
The statement of these rules depends on understanding the notion of the canonical form of a literal. The section "Defining the canonical form of a literal" explained that the ::text typecast of any kind of array shows you that this form of the literal (more carefully stated, the text of this literal) can be used to recreate the value. In fact, this definition, and the property that the canonical form of the literal is sufficient to recreate the value, hold for values of all data types. Recall that every value within an array necessarily has the same data type. If you follow the rules that are stated here, and illustrated in the demonstrations below, you will always produce a syntactically valid literal which expresses the semantics that you intend. It turns out that many other variants, especially for text[] arrays, are legal and can produce the values that you intend. However, the rules that govern these exotic uses will not be documented, because it is always sufficient to create your literals in canonical form.

Here is the sufficient set of rules.
- The commas that delimit successive values, the curly braces that enclose the entire literal, and the inner curly braces that are used in the literals for multidimensional arrays, can be surrounded by arbitrary amounts of whitespace. If you want strictly to adhere to canonical form, then you ought not to do this. But doing so can improve the readability of ad hoc manually typed literals. It can also make it easier to read trace output in a program that constructs array literals programmatically.
- In numeric and boolean array literals, do not surround the individual values with double quotes.
- In the literal for a timestamp[] array, do surround the individual values with double quotes—even though this is not strictly necessary.
- In the literal for a text[] array, do surround every individual value with double quotes, even though this is not always necessary. It is necessary for any value that itself contains, as ordinary text, any whitespace or any of the characters that have syntactic significance within the outermost curly brace pair. This is the list: <space> { } , " \
- It's sufficient to write the curly braces and the comma ordinarily within the enclosing double quotes. But each of the double quote character and the backslash character must be escaped with an immediately preceding single backslash.

Always write array literals in canonical form
Bear in mind that you will rarely manually type literals in the way that this section does to demonstrate the rules. You'll do this only when teaching yourself, when prototyping new code, or when debugging. Rather, you'll typically create the literals programmatically—often in a client-side program that parses out the data values from, for example, an XML text file or, these days, probably a JSON text file.
In these scenarios, the target array is likely to have the data type "some_user_defined_row_type[]". And when you create literals programmatically, you want to use the simplest rules that work, and you have no need at all to omit arguably unnecessary double quote characters.

Yugabyte recommends that the array literals that you generate programmatically are always spelled using the canonical representations. You can relax this recommendation, to make tracing or debugging your code easier (as mentioned above), by using a newline between each successive encoded value in the array—at least when the values themselves use a lot of characters, as they might for "row" type values.

Note: You can hope that the client side programming language that you use, together with the driver that you use to issue SQL to YugabyteDB and to retrieve results, will allow the direct use of data types that your language defines that map directly to the YSQL array and "row" type, just as they have scalar data types that map to int, text, timestamp, and boolean. For example, Python has "list" that maps to array and "tuple" that maps to "row" type. And the "psycopg2" driver that you use for YugabyteDB can map values of these data types to, for example, a PREPARE statement like the one shown below.

Note: YSQL has support for converting a JSON array (and this includes a JSON array of JSON objects) directly into the corresponding YSQL array values.

The rules for constructing literals for arrays of "row" type values are described in the literal for an array of "row" type values section. Your program will parse the input and create the required literals as ordinary text strings that you'll then provide as the actual argument to a PREPARE statement execution, leaving the typecast of the text actual argument, to the appropriate array data type, to the prepared INSERT or UPDATE statement, like this:

prepare stmt(text) as insert into t(rs) values($1::rt[]);

Assume that, in this example, "rt" is some particular user-defined "row" type.

Examples to illustrate the rules
Here are some examples of kinds of arrays of primitive values:
- array of numeric values (like int and numeric)
- array of stringy values (like text, varchar, and char)
- array of date-time values (like timestamp)
- array of boolean values.

In order to illustrate the rules that govern the construction of an array literal, it is sufficient to consider only these. You'll use the array[] constructor to create representative values of each kind and inspect the ::text typecast of each.

One-dimensional array of int values
This example demonstrates the principle:

create table t(k serial primary key, v1 int[], v2 int[]);
insert into t(v1) values (array[1, 2, 3]);
select v1::text as text_typecast from t where k = 1
\gset result_
\echo :result_text_typecast

The \gset metacommand was used first in this "Array data types and functionality" major section in array_agg() and unnest(). Notice that, in this example, the SELECT statement is terminated by the \gset metacommand on the next line rather than by the usual semicolon. The \gset metacommand is silent. The \echo metacommand shows this:

{1,2,3}

You can see the general form already: The (text of) an array literal starts with the left curly brace and ends with the right curly brace. The items within the braces are delimited by commas, and there is no space between one item, the comma, and the next item. Nor is there any space between the left curly brace and the first item or between the last item and the right curly brace.
The section One-dimensional array of text values, below, shows that more needs to be said. But the two rules that you've already noticed always hold.

To use the literal that you produced to create a value, you must enquote it and typecast it. Do this with the \set metacommand:

\set canonical_literal '\'':result_text_typecast'\'::int[]'
\echo :canonical_literal

The \echo metacommand now shows this:

'{1,2,3}'::int[]

Next, use the canonical literal that was produced to recreate the value. As promised, the canonical form of the array literal does indeed recreate the identical value that the array[] constructor created.

Note: Try this:

select 12512454.872::text;

The result is the canonical form, 12512454.872. So this (though you rarely see it):

select 12512454.872::numeric;

runs without error. Now try this:

select to_number('12,512,454.872', '999G999G999D999999')::text;

This, too, runs without error because it uses the to_number() built-in function. The result here, too, is the canonical form, 12512454.872—with no commas. Now try this:

select '12,512,454.872'::numeric;

This causes the "22P02: invalid input syntax for type numeric" error. In other words, only a numeric value in canonical form can be directly typecast using ::numeric.

Here, using an array literal, is an informal first look at what follows. For now, take its syntax to mean what you'd intuitively expect. You must spell the representations for the values in a numeric[] array in canonical form. Try this:

select ('{123.456, -456.789}'::numeric[])::text;

It shows this:

{123.456,-456.789}

Now try this:

select ('{9,123.456, -8,456.789}'::numeric[])::text;

It silently produces this presumably unintended result (an array of four numeric values) because the commas are taken as delimiters and not as part of the representation of a single numeric value:

{9,123.456,-8,456.789}

In an array literal (or in a "row" type value literal), there is no way to accommodate forms that cannot be directly typecast. (The same holds for timestamp values as for numeric values.) YSQL inherits this limitation from PostgreSQL. It is the user's responsibility to work around this when preparing the literal because, of course, functions like "to_number()" cannot be used within literals. Functions can, however, be used in a value constructor, as the array[] value constructor section shows.

One-dimensional array of text values
Use One-dimensional array of int values as a template for this and the subsequent sections. The example sets array values, each of which, apart from the single character a, needs some discussion. These are the characters (or, in one case, a character sequence), listed here "bare" and with ten spaces between each:

a          a b          ()          ,          {}          '          "          \

create table t(k serial primary key, v1 text[], v2 text[]);
insert into t(v1) values (array['a', 'a b', '()', ',', '{}', $$'$$, '"', '\']);
select v1::text as text_typecast from t where k = 1
\gset result_
\echo :result_text_typecast

For ordinary reasons, something special is needed to establish the single quote within the surrounding array literal, which itself must be enquoted for use in SQL. Dollar quotes are a convenient choice. The \echo metacommand shows this:

{a,"a b",(),",","{}",',"\"","\\"}

This is rather hard (for the human) to parse. To make the rules easier to see, this list doesn't show the left and right curly braces. And the syntactically significant commas are surrounded with four spaces on each side:

a    ,    "a b"    ,    ()    ,    ","    ,    "{}"    ,    '    ,    "\""    ,    "\\"

In addition to the first two rules, notice the following.
- Double quotes are used to surround a value that includes any spaces. (Though the example doesn't show it, this applies to leading and trailing spaces too.) - The left and right parentheses are not surrounded with double quotes. Though these have syntactic significance in other parsing contexts, they are insignificant within the curly braces of an array literal. - The comma has been surrounded by double quotes. This is because it does have syntactic significance, as the value delimiter, within the curly braces of an array literal. - The curly braces have been surrounded by double quotes. This is because interior curly braces do have syntactic significance, as you'll see below, in the array literal for a multidimensional array. - The single quote is not surrounded with double quotes. Though it has syntactic significance in other parsing contexts, it is insignificant within the curly braces of an array literal. This holds, also, for all sorts of other punctuation characters like ;and :and [and ]and so on. - The double quote has been escaped with a single backslash and this has been then surrounded with double quotes. This is because it does have syntactic significance, as the (one and only) quoting mechanism, within the curly braces of an array literal. - The backslash has also been escaped with another single backslash and this has been then surrounded with double quotes. This is because it does have syntactic significance, as the escape character, within the curly braces of an array literal. There's another rule that the present example does not show. Though not every comma-separated value was surrounded by double quotes, it's never harmful to do this. You can confirm this with your own test, Yugabyte recommends that, for consistency, you always surround every text value within the curly braces for a text[] array literal with double quotes. To use the text of the literal that was produced above to recreate the value, you must enquote it and typecast it. Do this, as you did for the int[] example above, with the \set metacommand. But you must use dollar quotes because the literal itself has an interior single quote. \set canonical_literal '$$':result_text_typecast'$$'::text[] \echo :canonical_literal The \echo metacommand now shows this: $${a,"a b",(),",",',"\"","\\"}$$::text[] Next, use the canonical literal to update "t.v2" to confirm that the value that the row constructor created was recreated: update t set v2 = :canonical_literal where k = 1; select (v1 = v2)::text as "v1 = v2" from t where k = 1; Again, it shows this: v1 = v2 --------- true So, again as promised, the canonical form of the array literal does indeed recreate the identical value that the array[] constructor created. One-dimensional array of timestamp values This example demonstrates the principle: create table t(k serial primary key, v1 timestamp[], v2 timestamp[]); insert into t(v1) values (array[ '2019-01-27 11:48:33'::timestamp, '2020-03-30 14:19:21'::timestamp ]); select v1::text as text_typecast from t where k = 1 \gset result_ \echo :result_text_typecast The \echo metacommand shows this: {"2019-01-27 11:48:33","2020-03-30 14:19:21"} You learn one further rule from this: - The ::timestamptypecastable strings within the curly braces are tightly surrounded with double quotes. To use the text of the literal that was produced to create a value, you must enquote it and typecast it. Do this with the \set metacommand: \set canonical_literal '\'':result_text_typecast'\'::timestamp[]' \echo :canonical_literal . 
The \echo metacommand now shows this:

'{"2019-01-27 11:48:33","2020-03-30 14:19:21"}'::timestamp[]

Next, use the canonical literal in the same way. Once again, as promised, the canonical form of the array literal does indeed recreate the identical value that the array[] constructor created.

One-dimensional array of boolean values (and NULL in general)
This example demonstrates the principle:

create table t(k serial primary key, v1 boolean[], v2 boolean[]);
insert into t(v1) values (array[ true, false, null ]);
select v1::text as text_typecast from t where k = 1
\gset result_
\echo :result_text_typecast

The \echo metacommand shows this:

{t,f,NULL}

You learn two further rules from this:
- The canonical representations of TRUE and FALSE within the curly braces for a boolean[] array are t and f. They are not surrounded by double quotes.
- To specify NULL, the canonical form uses upper case NULL and does not surround this with double quotes. Though the example doesn't show this, NULL is not case-sensitive. But to compose a literal that adheres to canonical form, you ought to spell it using upper case.

And this is how you specify NULL within the array literal for any data type. (A different rule applies for fields within the literal for a "row" type value.)

Note: If you surrounded NULL with double quotes within the literal for a text[] array, then it would be silently interpreted as an ordinary text value that just happens to be spelled that way.

To use the literal that was produced to create a value, you must enquote it and typecast it. Do this with the \set metacommand:

\set canonical_literal '\'':result_text_typecast'\'::boolean[]'
\echo :canonical_literal

The \echo metacommand now shows this:

'{t,f,NULL}'::boolean[]

Next, use the canonical literal to update "t.v2" to confirm that the value that the array[] constructor created has been recreated:

update t set v2 = :canonical_literal where k = 1;
select (v1 = v2)::text as "v1 = v2" from t where k = 1;

It shows this:

 v1 = v2
---------
 true

Yet again, as promised, the canonical form of the array literal does indeed recreate the identical value that the array[] constructor created.

Multidimensional array of int values

create table t(k serial primary key, v int[]);

-- Insert a 1-dimensional int[] value.
insert into t(v) values(' {1, 2} '::int[]);

-- Insert a 2-dimensional int[] value.
insert into t(v) values(' { {1, 2}, {3, 4} } '::int[]);

-- Insert a 3-dimensional int[] value.
insert into t(v) values(' { { {1, 2}, {3, 4} }, { {5, 6}, {7, 8} } } '::int[]);

-- Insert a 3-dimensional int[] value, specifying
-- the lower and upper bounds along each dimension.
insert into t(v) values(' [3:4][5:6][7:8]= { { {1, 2}, {3, 4} }, { {5, 6}, {7, 8} } } '::int[]);

select k, array_ndims(v) as "ndims", v::text as "v::text" from t order by k;

Notice that the first three INSERT statements define arrays with different dimensionality, as the comments state. This illustrates what was explained in Synopsis: the column "t.v" can hold array values of any dimensionality. Here is the SELECT result:

 k | ndims | v::text
---+-------+-----------------------------------------------
 1 | 1     | {1,2}
 2 | 2     | {{1,2},{3,4}}
 3 | 3     | {{{1,2},{3,4}},{{5,6},{7,8}}}
 4 | 3     | [3:4][5:6][7:8]={{{1,2},{3,4}},{{5,6},{7,8}}}

Again, whitespace in the inserted literals for numeric values is insignificant, and the text typecasts use whitespace (actually, the lack thereof) conventionally.
The literal for a multidimensional array has nested {} pairs, according to the dimensionality, and the innermost pair contains the literals for the primitive values. Notice the spelling of the array literal for the row with "k = 4". The optional syntax [3:4][5:6][7:8] specifies the lower and upper bounds, respectively, for the first, the second, and the third dimension. This is the same syntax that you use to specify a slice of an array. (The array slice operator is described in its own section.) When the freedom to specify the bounds is not exercised, then they are all assumed to start at 1, and then the canonical form of the literal does not show the bounds. When the freedom is exercised, the bounds for every dimension must be specified.

Specifying the bounds gives you, of course, an opportunity for error. If the length along each axis that you (implicitly) specify doesn't agree with the lengths that emerge from the actual values listed between the surrounding outer {} pair, then you get the "22P02 invalid_text_representation" error with this prose explanation:

malformed array literal... Specified array dimensions do not match array contents.
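As a footnote to the recommendation above about creating literals programmatically, here is a minimal Python sketch using the psycopg2 driver mentioned earlier. The connection parameters and the table are hypothetical (it assumes a table created with "create table t(k int primary key, arr int[])"); the point is that the driver adapts a Python list to an array value for you, so in many client programs no array literal needs to be composed by hand at all.

import psycopg2

# Hypothetical connection settings; adjust host, port, user, and database for your cluster.
conn = psycopg2.connect(host="127.0.0.1", port=5433, user="yugabyte", dbname="yugabyte")
cur = conn.cursor()

# psycopg2 adapts the Python list [1, 2, 3] to an array value,
# so no array literal is spelled out in the client code.
cur.execute("insert into t(k, arr) values (%s, %s)", (1, [1, 2, 3]))
conn.commit()

# Reading the row back returns the array as a Python list again.
cur.execute("select arr from t where k = 1")
print(cur.fetchone()[0])   # prints [1, 2, 3]

cur.close()
conn.close()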
https://docs.yugabyte.com/latest/api/ysql/datatypes/type_array/literals/array-of-primitive-values/
2022-01-17T01:46:27
CC-MAIN-2022-05
1642320300253.51
[]
docs.yugabyte.com
DRAFT: OpenAIRE Guidelines for Literature Repository Managers v4
These guidelines describe the v4 application profile that literature repository managers can use to be compatible with OpenAIRE.

How to contribute
You can contribute by sending an e-mail to: [email protected]
https://openaire-guidelines-for-literature-repository-managers.readthedocs.io/en/v4.0.0/
2022-01-17T01:17:03
CC-MAIN-2022-05
1642320300253.51
[]
openaire-guidelines-for-literature-repository-managers.readthedocs.io
All disaster/easter egg vehicles are handled here. More... #include "stdafx.h" #include "aircraft.h" #include "disaster_vehicle.h" #include "industry.h" #include "station_base.h" #include "command_func.h" #include "news_func.h" #include "town.h" #include "company_func.h" #include "strings_func.h" #include "date_func.h" #include "viewport_func.h" #include "vehicle_func.h" #include "sound_func.h" #include "effectvehicle_func.h" #include "roadveh.h" #include "ai/ai.hpp" #include "game/game.hpp" #include "company_base.h" #include "core/random_func.hpp" #include "core/backup_type.hpp" #include "table/strings.h" #include "safeguards.h" Go to the source code of this file. All disaster/easter egg vehicles are handled here. The general flow of control for the disaster vehicles is as follows: Definition in file disaster_vehicle.cpp. Aircraft handling, v->current_order.dest states: 0: Fly towards the targeted industry 1: If within 15 tiles, fire away rockets and destroy industry 2: Industry explosions 3: Fly out of the map If the industry was removed in the meantime just fly to the end of the map. Definition at line 423 of file disaster_vehicle.cpp. References Vehicle::current_order, Order::GetDestination(), GetNewVehiclePos(), HasBit(), DisasterVehicle::image_override, Vehicle::tick_counter, DisasterVehicle::UpdatePosition(), and GetNewVehiclePosResult::y. Referenced by DisasterTick_Airplane(), and DisasterTick_Helicopter(). Airplane handling. Definition at line 485 of file disaster_vehicle.cpp. References DisasterTick_Aircraft(). (Big) Ufo handling, v->current_order.dest states: 0: Fly around to the middle of the map, then randomly for a while and home in on a piece of rail 1: Land there and breakdown all trains in a radius of 12 tiles; and now we wait... because as soon as the Ufo lands, a fighter jet, a Skyranger, is called to clear up the mess Definition at line 516 of file disaster_vehicle.cpp. References Vehicle::current_order, Order::GetDestination(), and Vehicle::tick_counter. Helicopter handling. Definition at line 491 of file disaster_vehicle.cpp. References DisasterTick_Aircraft(). Marks all disasters targeting this industry in such a way they won't call Industry::Get(v->dest_tile) on invalid industry anymore. Definition at line 943 of file disaster_vehicle.cpp. References Vehicle::current_order, Vehicle::dest_tile, FOR_ALL_DISASTERVEHICLES, Order::GetDestination(), Order::SetDestination(), ST_AIRPLANE, ST_HELICOPTER, and Vehicle::subtype. Notify disasters that we are about to delete a vehicle. So make them head elsewhere. Definition at line 959 of file disaster_vehicle.cpp. References Vehicle::age, Vehicle::current_order, Vehicle::dest_tile, FOR_ALL_DISASTERVEHICLES, GetAircraftFlightLevelBounds(), Order::GetDestination(), RandomTile, Order::SetDestination(), ST_SMALL_UFO, Vehicle::subtype, and Vehicle::z_pos. Delay counter for considering the next disaster. Definition at line 56 of file disaster_vehicle.cpp. Definition at line 101 of file disaster_vehicle.cpp. Definition at line 893 of file disaster_vehicle.cpp. Definition at line 682 of file disaster_vehicle.cpp.
http://docs.openttd.org/disaster__vehicle_8cpp.html
2019-05-19T08:47:12
CC-MAIN-2019-22
1558232254731.5
[]
docs.openttd.org
CALCULATETABLE
Evaluates a table expression in a context modified by the given filters.

Syntax
CALCULATETABLE(<expression>,<filter1>,<filter2>,…)

Parameters
The expression used as the first parameter must be an expression that returns a table, not one that calculates a scalar value.

Return value
A table of values.

Example
The following example uses the CALCULATETABLE function to get the sum of Internet sales for 2006. This value is later used to calculate the ratio of Internet sales compared to all sales for the year 2006. The sum is returned by the following formula.

=SUMX( CALCULATETABLE('InternetSales_USD', 'DateTime'[CalendarYear]=2006) , [SalesAmount_USD])

See also
RELATEDTABLE function (DAX)
Filter functions (DAX)
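Building on that example, a sketch of the full ratio calculation might look like the following. The measure name is invented here and the exact modeling details will vary; the table and column names are simply those used in the formula above.

InternetSalesRatio2006 :=
DIVIDE(
    SUMX(
        CALCULATETABLE('InternetSales_USD', 'DateTime'[CalendarYear] = 2006),
        [SalesAmount_USD]
    ),
    SUMX('InternetSales_USD', [SalesAmount_USD])
)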
https://docs.microsoft.com/en-us/dax/calculatetable-function-dax
2019-05-19T09:18:10
CC-MAIN-2019-22
1558232254731.5
[]
docs.microsoft.com
All content with label concurrency+consistent_hash+hot_rod+infinispan+jboss_cache+listener+read_committed+release+scala.
https://docs.jboss.org/author/label/concurrency+consistent_hash+hot_rod+infinispan+jboss_cache+listener+read_committed+release+scala
2019-05-19T09:06:36
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
All content with label development+gridfs+infinispan+installation+jcache+jsr-107+repeatable_read+resteasy.
https://docs.jboss.org/author/label/development+gridfs+infinispan+installation+jcache+jsr-107+repeatable_read+resteasy
2019-05-19T10:00:43
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
Managing Sessions in Your Application (Android Only)
(For iOS applications, the AWS Mobile SDK for iOS automatically reports session information.)

Before You Begin
If you haven't already, do the following:
- Integrate the AWS Mobile SDK for Android with your app. See Integrating the AWS Mobile SDKs for Android or iOS.
- Update your application to register endpoints. See Registering Endpoints in Your Application.

Next Step
You've updated your Android app to report session information. Now, when users open and close your app, you can see session metrics in the Amazon Pinpoint console, including those shown by the Sessions and Session heat map charts. Next, update your app to report usage data. See Reporting Events in Your Application.
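The code sample that originally accompanied this page is not reproduced here, so the following is only a rough sketch of the usual pattern: start a session when the app comes to the foreground and stop it when the app goes to the background, using the Pinpoint session client. It assumes a PinpointManager instance named pinpointManager that was initialized during endpoint registration; check the SDK reference for the exact class and method names in your SDK version.

// Call when your app comes to the foreground (for example, from an activity
// lifecycle callback or an application lifecycle helper).
public void handleApplicationEnterForeground() {
    pinpointManager.getSessionClient().startSession();
}

// Call when your app goes to the background.
public void handleApplicationEnterBackground() {
    pinpointManager.getSessionClient().stopSession();
    // Submit the recorded events so the session data reaches Amazon Pinpoint.
    pinpointManager.getAnalyticsClient().submitEvents();
}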
https://docs.aws.amazon.com/pinpoint/latest/developerguide/integrate-sessions-android.html
2018-04-19T11:52:58
CC-MAIN-2018-17
1524125936914.5
[]
docs.aws.amazon.com
The hbase.bucketcache.percentage.in.combinedcache property is removed in HDP 2.6.0. This simplifies the configuration of the block cache. BucketCache configurations from HDP 2

If bulk load support for backup/restore is required, follow these steps:
- Add org.apache.hadoop.hbase.backup.BackupHFileCleaner through hbase.master.hfilecleaner.plugins. org.apache.hadoop.hbase.backup.BackupHFileCleaner is responsible for keeping bulk loaded hfiles so that incremental backup can pick them up.
- Add org.apache.hadoop.hbase.backup.BackupObserver through hbase.coprocessor.region.classes. org.apache.hadoop.hbase.backup.BackupObserver is notified when bulk load completes and writes records into the hbase:backup table.
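As a rough illustration only (the property values below are assembled from the class and property names above and may need to be appended to existing values rather than set outright, so verify against the HDP documentation), the corresponding hbase-site.xml entries might look like this:

<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <!-- Append the backup cleaner to any cleaner plugins already configured. -->
  <value>org.apache.hadoop.hbase.backup.BackupHFileCleaner</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <!-- Append the backup observer to any region coprocessors already configured. -->
  <value>org.apache.hadoop.hbase.backup.BackupObserver</value>
</property>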
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-upgrade/content/start-hbase-25.html
2018-04-19T11:32:45
CC-MAIN-2018-17
1524125936914.5
[]
docs.hortonworks.com
40.1. sunaudiodev — Access to Sun audio hardware
Deprecated since version 2.6: The sunaudiodev module has been removed in Python 3.

exception sunaudiodev.error
This exception is raised on all errors. The argument is a string describing what went wrong.

sunaudiodev.open(mode)
This function opens the audio device and returns a Sun audio device object. The mode is one of 'r' for record-only access, 'w' for play-only access, 'rw' for both, and 'control' for access to the control device. This module first looks in the environment variable AUDIODEV for the base audio device filename. If not found, it falls back to /dev/audio. The control device is calculated by appending "ctl" to the base audio device.

40.1.1. Audio Device Objects
The audio device objects returned by open() define the following methods (except control objects, which only provide getinfo(), setinfo(), fileno(), and drain()):

audio device.close()
This method explicitly closes the device. It is useful in situations where deleting the object does not immediately close it since there are other references to it. A closed device should not be used again.

audio device.fileno()
Returns the file descriptor associated with the device. This can be used to set up SIGPOLL notification, as described below.

audio device.drain()
This method waits until all pending output is processed and then returns. Calling this method is often not necessary: destroying the object will automatically close the audio device and this will do an implicit drain.

audio device.flush()
This method discards all pending output. It can be used to avoid the slow response to a user's stop request (due to buffering of up to one second of sound).

audio device.getinfo()
This method retrieves status information, such as input and output volume, and returns it in the form of an audio status object. Members of the play substructure have o_ prepended to their name and members of the record structure have i_. So, the C member play.sample_rate is accessed as o_sample_rate, record.gain as i_gain and monitor_gain plainly as monitor_gain.

audio device.ibufcount()
This method returns the number of samples that are buffered on the recording side, i.e. the program will not block on a read() call of so many samples.

audio device.obufcount()
This method returns the number of samples buffered on the playback side. Unfortunately, this number cannot be used to determine a number of samples that can be written without blocking since the kernel output queue length seems to be variable.

audio device.read(size)
This method reads size samples from the audio input and returns them as a Python string. The function blocks until enough data is available.

audio device.setinfo(status)
This method sets the audio device status parameters. The status parameter is a device status object as returned by getinfo() and possibly modified by the program.

audio device.write(samples)
This method writes a Python string of audio samples to the device for playback, blocking if necessary until the output buffers can accept them.

40.2. SUNAUDIODEV — Constants used with sunaudiodev
Deprecated since version 2.6: The SUNAUDIODEV module has been removed in Python 3.

This is a companion module to sunaudiodev which defines useful symbolic constants like MIN_GAIN, MAX_GAIN, SPEAKER, etc. The names of the constants are the same names as used in the C include file <sun/audioio.h>, with the leading string AUDIO_ stripped.
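To tie the interface together, here is a minimal, illustrative Python 2 sketch of the record-then-play pattern implied by the methods above; the figure of 8000 samples (roughly one second of 8 kHz u-law audio) is an assumption, as is the availability of the device on the machine:

import sunaudiodev

# Record a short burst of audio from the input device.
rec = sunaudiodev.open('r')     # record-only access
samples = rec.read(8000)        # blocks until 8000 samples are available
rec.close()

# Play the recorded samples back.
play = sunaudiodev.open('w')    # play-only access
play.write(samples)             # queue the samples for playback
play.drain()                    # wait until all pending output has been played
play.close()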
https://docs.python.org/2/library/sunaudio.html
2016-08-31T14:17:30
CC-MAIN-2016-36
1471982290634.12
[]
docs.python.org
A newer version of this software is available You are viewing the documentation for an older version of this software. To find the documentation for the current version, visit the Couchbase documentation home page. If you are reading this then you have just downloaded the Couchbase Sqoop plugin. This plugin allows you to connect to Couchbase Server 2.0 or higher or Membase Server 1.7.1+ and stream keys into HDFS or Hive for processing with Hadoop. Note that in this document we will refer to our database as Couchbase, but if you are using Membase everything will still work correctly. If you have used Sqoop before for doing imports and exports from other databases then using this plugin should be straightforward since it uses a similar command line argument structure. The installation process for the Couchbase Sqoop plugin is simple. When you download the plugin from Cloudera you should find a set of files that need to be moved into you Sqoop installation. These files along with a short description of why they are needed are listed below. couchbase-hadoop-plugin-1.0.jar — This is the jar file that contains all of the source code that makes Sqoop read data from Couchbase. couchbase-config.xml — This is a property file used to register a ManagerFactory for the Couchbase plugin with Sqoop. couchbase-manager.xml — This property file tells Sqoop what jar the ManagerFactory defined in couchsqoop-config.xml resides. spymemcached-2.8-preview3.jar — This is the client jar used by our plugin to read and write data from Couchbase. jettison-1.1.jar - This is a dependency of memcached-2.7.jar. netty-3.1.5GA.jar - This is a dependency of memcached-2.7.jar. install.sh — A script to automatically install the Couchbase plugin files to Sqoop. Automatic installation is done through the use of the install.sh script that comes with the plugin download. The script takes one argument, the path to your Sqoop installation. Below is an example of how to use the script. shell> ./install.sh path_to_sqoop_home Manual installation of the Couchbase plugin requires copying the files downloaded from Cloudera into your Sqoop installation. Below are a list of files that contained in the plugin and the name of the directory in your Sqoop installation to copy each file to. couchbase-hadoop-plugin-1.0.jar — lib spymemcached-2.8.jar — lib jettison-1.1.jar — lib netty-3.1.5GA.jar — lib couchbase-config.xml — conf couchbase-manager.xml — conf/managers.d Uninstallation of the plugin requires removal of all of the files that were added to Sqoop during installation. To do this cd into your Sqoop home directory and execute the following command: shell> rm lib/couchbase-hadoop-plugin-1.0.jar lib/spymemcached-2.8.jar \ lib/jettison-1.1.jar lib/netty-3.1.5GA.jar \ conf/couchbase-config.xml conf/managers.d/couchbase-manager.xml The Couchbase Sqoop Plugin can be used with a variety of command line tools that are provided by Sqoop. In this section we discuss the usage of each tool. Since Sqoop is built for a relational model it requires that the user specifies a table to import and export into Couchbase. The Couchbase plugin uses the --table option to specify the type of tap stream for importing and exporting into Couchbase. For exports the user must enter a value for the --table option even though what is entered will not actually be used by the plugin. For imports the table command can take on only two values. DUMP — Causes all keys currently in Couchbase to be read into HDFS. 
BACKFILL_## — Streams all key mutations for a given amount of time (in minutes). For the --table value for the BACKFILL table, a time should be put in place of the ## characters. For example, BACKFILL_5 means stream key mutations in the Couchbase server for 5 minutes and then stop the stream.

For exports a value for --table is required, but the value will not be used. Any value used for the --table option when doing an export will be ignored by the Couchbase plugin.

A connect string option is required in order to connect to Couchbase. This can be specified with --connect on the command line. Below are two examples of connect strings. When creating your connect strings, simply replace the IP address above with the IP address of your Couchbase server. If you have multiple servers you can list them in a comma-separated list. Why list multiple servers? Let's say you create a backfill stream for 10,080 minutes or one week. In that time period you might have a server crash, have to add another server, or remove a server from your cluster. Providing an address to each server allows an import and export command to proceed through topology changes to your cluster. In the first example above, if you had a two-node cluster and 10.2.1.55 goes down, then the import will fail even though the entire cluster didn't go down. If you list both machines then the import will continue unaffected by the downed server and your import will complete successfully.

Importing data to your cluster requires the use of the Sqoop import command followed by the parameters --connect and --table. Below are some example imports.

shell> bin/sqoop import --connect --table DUMP

This will dump all key-value pairs from Couchbase into HDFS.

shell> bin/sqoop import --connect --table BACKFILL_10

This will stream all key-value mutations from Couchbase into HDFS. Sqoop provides many more options to the import command than we will cover in this document. Run bin/sqoop import help for a list of all options and see the Sqoop documentation for more details about these options.

Exporting data to your cluster requires the use of the Sqoop export command followed by the parameters --connect, --export-dir, and --table. Below are some example exports.

shell> bin/sqoop export --connect --table garbage_value --export-dir dump_4-12-11

This will export all key-value pairs from the HDFS directory specified by --export-dir into Couchbase.

shell> bin/sqoop export --connect --table garbage_value --export-dir backfill_4-29-11

This will export all key-value pairs from the HDFS directory specified by --export-dir into Couchbase. Sqoop provides many more options to the export command than we will cover in this document. Run bin/sqoop export help for a list of all options and see the Sqoop documentation for more details about these options.

Sqoop has a tool called list-tables that in a relational database has a lot of meaning since it shows us what kinds of things we can import. As noted in previous sections, Couchbase doesn't have a notion of tables, but we use DUMP and BACKFILL_## as values to the --table option. As a result, using the list-tables tool does the following.

shell> bin/sqoop list-tables --connect
DUMP
BACKFILL_5

All this does in the case of the Couchbase plugin is remind us what we can use as an argument to the --table option. We give BACKFILL a time of 5 minutes so that the import-all-tables tool functions properly. Sqoop provides many more options to the list-tables command than we will cover in this document.
Run bin/sqoop list-tables help for a list of all options and see the Sqoop documentation for more details about these options.. While Couchbase provides many great features to import and export data from Couchbase to Hadoop there is some functionality that the plugin doesn’t implement in Sqoop. Here’s a list of what isn’t implemented. Querying: You cannot run queries on Couchbase. All tools that attempt to do this will fail with a NotSupportedException. list-databases tool: Even though Couchbase is a multi-tenant system that allows for multiple databases. There is no way of listing these databases from Sqoop. eval-sql tool: Couchbase doesn’t use SQL so this tool will not work. The Couchbase plugin consists of two parts. The first part is the addition of code that allows the mappers in Hadoop to read the values sent to it from Couchbase. The second part is the use of the Spymemcached client to get data to and from Couchbase. For imports the plugin uses the tap stream feature in Spymemcached. Tap streams allow users to stream large volumes of data from Couchbase into other applications and are also at the heart of replication in Couchbase. They enable a fast way to move data from Couchbase to Hadoop for further processing. Getting data back into Couchbase runs through the front end of Couchbase using the memcached protocol. For more information about the internals of Sqoop see the Sqoop documentation.
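Because the connect strings were omitted from the examples above, here is a purely illustrative invocation; the host, port, and /pools path are assumptions based on typical Couchbase REST endpoints, so substitute the addresses of your own cluster nodes (comma-separated if you list more than one):

shell> bin/sqoop import --connect http://10.2.1.55:8091/pools --table DUMP
shell> bin/sqoop import --connect http://10.2.1.55:8091/pools,http://10.2.1.56:8091/pools --table BACKFILL_10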
http://docs.couchbase.com/hadoop-plugin-1.1/
2016-08-31T14:14:48
CC-MAIN-2016-36
1471982290634.12
[]
docs.couchbase.com
In this exercise, you will import an aerial image from Google Earth into AutoCAD Civil 3D. To provide a useful source image, Google Earth must display the image from directly above. If the image is tilted, it will still be captured by AutoCAD Civil 3D. However, it will not be a useful representation of the surface when it is converted to a render material. When the Google Earth image is imported into AutoCAD Civil 3D, it appears in the drawing as a gray scale image object. The image is scaled by both the linear units in the drawing and the extents of the latitude/longitude of the image. The image aspect ratio matches that of the image displayed in the Google Earth window. AutoCAD Civil 3D automatically generates a name for the image, using the first three letters of the drawing file name and a unique ID number. The image is saved in the same directory as the drawing file. When the image is draped on the surface, a new render material is created from the image, and the render material is applied to the surface. If the image is larger than the surface, the image is clipped to the extents of the surface object. If multiple, smaller images are needed to cover a surface, you must combine them into a single image. For more information, see the AutoCAD Civil 3D Help topic Importing a Google Earth Image to AutoCAD Civil 3D. This exercise continues from Exercise 1: Publishing Surface Data to Google Earth. Import a Google Earth image If the Google Earth navigation controls are not visible, hover the cursor over the compass at the upper right-hand corner of the screen: Compass Navigation Controls The gray scale Google Earth image appears in the drawing window in the appropriate location under the surface object. The image was placed on the current layer, which is named Image in the current drawing. To continue this tutorial, go to Exercise 3: Draping an Image on a Surface.
http://docs.autodesk.com/CIVIL/2010/ENU/AutoCAD%20Civil%202010%20User%20Documentation/files/WS1a9193826455f5ff13d8321148f78dfd9-4ca0-process.htm
2016-08-31T14:16:24
CC-MAIN-2016-36
1471982290634.12
[]
docs.autodesk.com
Deciding to file for bankruptcy is one of the most important financial decisions you can make. The purpose of this guide is to walk the user step by step through what can be a very confusing and complex process. While a lawyer is not necessary to file bankruptcy, the process can be a challenge for the layperson. This guide was created to inform the user, but is not meant to provide legal advice. This guide will address many of the frequently asked questions that arise in a Chapter 12 filing, as well as discuss the rules, filing fees, and timelines required. Lastly, it will outline a step-by-step course of action for preparing your own Chapter 12 filing.
http://premium.docstoc.com/docs/23716791/Bankruptcy-Frequently-Asked-Questions-Chapter-12
2013-05-18T17:48:08
CC-MAIN-2013-20
1368696382584
[]
premium.docstoc.com
- - - and Citrix Virtual Apps Use of this version of Profile Management on Citrix Virtual Apps servers is subject to the Profile Management EULA. You can also install Profile Management on local desktops, allowing users to share their local profile with published resources. Note: Profile Management automatically configures itself in Citrix Virtual Desktops but not Citrix Virtual Apps environments. Use Group Policy or the .ini file to adjust Profile Management settings for your Citrix Virtual Apps deployment. Profile Management works in Citrix Virtual Apps environments that employ Remote Desktop Services (formerly known as Terminal Services). In these environments, you must set up an OU for each supported operating system. For more information, see your Microsoft documentation. In farms that contain different versions of Citrix Virtual Apps or that run different operating systems, Citrix recommends using a separate OU for each server that runs each version or operating system. Important: Including and excluding folders that are shared by multiple users (for example, folders containing shared application data published with Citrix Virtual Apps) is not supported. Streamed applications Profile Management can be used in environments where applications are streamed to either user devices directly or streamed to Citrix Virtual Apps servers and, from there, published to users. Client-side application virtualization technology in Citrix Virtual Apps is based on application streaming which automatically isolates the application. The application streaming feature enables applications to be delivered to Citrix Virtual Apps servers and client devices, and run in a protected virtual environment. There are many reasons to isolate the applications that are being streamed to users, such as the ability to control how applications interact on the user device to prevent application conflicts. For example, isolation of user settings is required if different versions of the same application are present. Microsoft Office 2003 might be installed locally and Office 2007 might be streamed to users’ devices. Failure to isolate user settings creates conflicts, and might severely affect the functionality of both applications (local and streamed). For requirements relating to the use of Profile Management with streamed applications, see System requirements..
https://docs.citrix.com/en-us/profile-management/current-release/integrate/xenapp.html
2021-06-12T20:33:00
CC-MAIN-2021-25
1623487586390.4
[]
docs.citrix.com
Libvirt virt driver OS distribution support matrix
This page documents the libvirt versions present in the various distro versions that OpenStack Nova aims to be deployable with.

Note: This document was previously hosted on the OpenStack wiki:

Libvirt minimum version change policy
At the start of each Nova development cycle this matrix will be consulted to determine if it is viable to drop support for any end-of-life or otherwise undesired distro versions. Based on this distro evaluation, it may be possible to increase the minimum required version of libvirt in Nova, and thus drop some compatibility code for older versions.

When a decision to update the minimum required libvirt version is made, there must be a warning issued for one cycle. This is achieved by editing nova/virt/libvirt/driver.py to set NEXT_MIN_LIBVIRT_VERSION. For example:

NEXT_MIN_LIBVIRT_VERSION = (X, Y, Z)

This causes a deprecation warning to be emitted when Nova starts up, warning the admin that the version of libvirt in use on the host will no longer be supported in the subsequent release.

After a version has been listed in NEXT_MIN_LIBVIRT_VERSION for one release cycle, the corresponding actual minimum required libvirt can be updated by setting

MIN_LIBVIRT_VERSION = (X, Y, Z)

At this point, of course, an even newer version might be set in NEXT_MIN_LIBVIRT_VERSION to repeat the process.

An email should also be sent at this point to the [email protected] mailing list as a courtesy raising awareness of the change in minimum version requirements in the upcoming release, for example:

There is more background on the rationale used for picking minimum versions in the operators mailing list thread here:

QEMU minimum version change policy
After choosing a minimum libvirt version, the minimum QEMU version is determined by looking for the lowest QEMU version from all the distros that support the decided libvirt version.

OS distribution versions
This section tabulates the distro versions that can satisfy the minimum required software versions. This table merely aims to help identify when minimum required versions can be reasonably updated without losing support for important OS distros.
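As a concrete, purely hypothetical illustration of the two-step process described under "Libvirt minimum version change policy" above, the constants in nova/virt/libvirt/driver.py might evolve like this across two cycles; the version numbers are invented, and the matching QEMU constants are shown on the assumption that they follow the same pattern:

# Cycle N: warn that this version will become the floor in the next cycle.
NEXT_MIN_LIBVIRT_VERSION = (7, 0, 0)   # hypothetical
NEXT_MIN_QEMU_VERSION = (5, 2, 0)      # hypothetical

# Cycle N+1: actually raise the floor (and optionally stage the next bump).
MIN_LIBVIRT_VERSION = (7, 0, 0)
MIN_QEMU_VERSION = (5, 2, 0)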
https://docs.openstack.org/nova/latest/reference/libvirt-distro-support-matrix.html
2021-06-12T21:33:00
CC-MAIN-2021-25
1623487586390.4
[]
docs.openstack.org
publishing-api: Publishing API's Model Contents - Introduction - Content - Linking - History Introduction This document serves as a broad introduction to the domain models used in the Publishing API and their respective purposes. They can be separated into 3 areas of concern: - Content - Content that is stored in the Publishing API. - Linking - Links between content that is stored. - History - The storing of operations that may have altered content or links. These areas are all interconnected through the use of shared content_id fields. content_id content_id is a UUID value that is used to identify distinct pieces of content that are used on GOV.UK. It is generated from within a publishing application and the same content_id is used for content that is available in multiple translations. Different iterations of the same piece of content all share the same content_id. Each piece of content stored in the Publishing API is associated with a content_id, the links stored are relationships between content_ids, and history is associated with a content_id. Diagram The following is a high-level diagram that was generated with plantuml. The source that generated this diagram is checked into this repository. Content Document A document represents all iterations of a piece of content in a particular locale. It is associated with multiple editions that represent distinct versions of a piece of content. The concerns of a document are which iterations are represented on draft and live content stores; and the lock version for the content. A document stores the content_id, locale and lock version for content. It is designed to be a simple model so that it can be used for database level locking of concurrent requests. Edition An edition is a particular iteration of a piece of content. It stores most of the data that is used to represent content in the content store and is associated with a document. There are uniqueness constraints to ensure there are not conflicting Editions. Previously an Edition was named ContentItem. Most of the fields stored on an edition are defined as part of the /put-content/:content_id API. Key fields that are set internally by the Publishing API are: state- where an edition is in its publishing workflow, can be "draft", "published", "unpublished" or "superseded". user_facing_version- an integer that stores which iteration of a document an edition is. content_store- indicates whether an edition is intended for draft, live or no content store. Documents that have an edition with a "live" content_store value will have the corresponding edition presented on the live content store. All documents where there is an edition with a "draft" or "live" value of content_store are presented on the draft content store. With the draft edition presented if available, otherwise the live one. Workflow An edition can be in one of four states: "draft", "published", "unpublished" and "superseded". At any one time a document can contain: - 1 edition in a "draft" state - 1 edition in a "published" or "unpublished" state - any number of editions in a "superseded" state When the first edition of a document is created it is in a "draft" state and available on the draft content store. The content can be updated any number of times before publishing. Once an edition has been published it is possible to create a new edition of the draft - thereby having 1 draft edition and 1 published edition of a document. A published edition can be unpublished, which will create an unpublishing for the edition. 
The unpublished edition will be represented on the live content store. If a draft is published while there is already a published or unpublished edition, the previous edition will have its state updated to "superseded" and will be replaced on the live content store with the newly published edition.

Uniqueness
There are uniqueness constraints to ensure conflicting editions cannot be stored:
- No two editions can share the same base_path and content_store values. This ensures there can't be multiple documents that are trying to use the same path on GOV.UK.
- For a document there can't be two editions with the same user_facing_version. This prevents there being two editions sharing the same version number.
- For a document there can't be two editions on the same content store. This prevents an edition being accidentally available in multiple versions in multiple places.

Substitution
When creating and publishing editions, an existing edition with the same base_path will be blocked due to uniqueness constraints. However, when one of the items that conflicts is considered substitutable (typically a non-content type) the operation can continue and the blocking item will be discarded, in the case of a draft, or unpublished if it is published.

Unpublishing
When an edition is unpublished, an Unpublishing model is used to represent the type of unpublishing and associated metadata so that the unpublished edition can be represented correctly in the content store. An unpublishing can be one of 5 types:
- withdrawal - The edition will still be readable on GOV.UK but will have a withdrawn banner, provided with an explanation and an optional alternative_path.
- redirect - Attempts to access the edition on GOV.UK will be redirected according to the redirects hash, or a provided alternative_path.
- gone - Attempts to access the edition on GOV.UK will receive a 410 Gone HTTP response.
- vanish - Attempts to access the edition on GOV.UK will receive a 404 Not Found HTTP response.
- substitute - This type cannot be set by a user and is automatically created when an edition is substituted.

ChangeNote
An Edition can be associated with a ChangeNote, which stores a note describing the changes that have occurred between major editions of a Document and the time the changes occurred. When presenting an edition of a Document to the content store, the change notes for that edition and all previous editions are combined to create a list of the change notes for the document.

AccessLimit
AccessLimit is a concept that is associated with an Edition in a "draft" state. It is used to store a list of user IDs (UIDs that represent users in signon) which will be the only users who can view the Edition in the draft environment.

PathReservation
A PathReservation is a model that associates a path (in the URI context of <path>) with a publishing application. This model is used to restrict the usage of paths to a particular publishing application. These are created when content is created or moved, and can be created before content exists to ensure that no other app can use the path.

Linking
Associations between content in the Publishing API are stored through Links; these are used to indicate a relationship between the documents of one content_id and the documents of a different content_id.

LinkSet
A LinkSet is a model that is used to represent the association of a content_id and a collection of Links. It stores a lock version number for use in optimistic locking.
Link

A Link represents the association to another content_id - known as the target_content_id. A link_type and an ordering are also stored on a Link.

link_type is used to represent the relationship between the content of the two content_ids. It is common for there to be multiple links with the same link_type; the ordering field stores the order in which the links of a given link_type were specified.

The source of a link can either be a LinkSet (i.e. a content_id), known as link set links, or an Edition, known as edition links.

History

The Publishing API stores information on operations that change the state of data stored in the Publishing API. These are stored through the Event and Action models.

Event

An Event is used to store the details of operations that may change state within the Publishing API. It stores data that identifies the end user and web request that initiated the operation; which operation was performed and which content was affected; and the payload of the input. Only operations that complete successfully are stored as Events.

Events are used as a debugging and reference tool by developers of the Publishing API. Because they generate large amounts of data, their full details are not stored permanently. Events older than a month are archived to S3; you can import these events back into your local DB by running the rake tasks in lib/tasks/events.rake, after you set up the relevant ENV variables. For example, to find all the events that are relevant for a particular content id you can run:

    rake 'events:import_content_id_events[a796ca43-021b-4960-9c99-f41bb8ef2266]'

Action

An Action is used to store the change history of a piece of content in the Publishing API. Actions are associated with both a content_id and an Edition. Requests that change state in the Publishing API create Actions that store which action was performed and the end user who initiated the request. Actions can also be created by publishing applications to store additional data on the workflow of content.
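To tie the two history models together, the sketch below records both an Event and an Action for a single successful, state-changing request: the Event captures who made the request and its full payload, while the Action records the change against the affected content_id and edition. The field names are invented for this example and do not reflect the Publishing API's actual schema:

    # Illustrative sketch of the Event/Action split described above.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Any, Dict, List

    @dataclass
    class Event:
        user_uid: str
        request_id: str
        action: str               # e.g. "PutContent", "Publish", "Unpublish"
        content_id: str
        payload: Dict[str, Any]
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class Action:
        content_id: str
        edition_id: int
        action: str
        user_uid: str

    EVENTS: List[Event] = []
    ACTIONS: List[Action] = []

    def record_history(user_uid: str, request_id: str, action: str,
                       content_id: str, edition_id: int, payload: Dict[str, Any]) -> None:
        """Record both sides of the history model for one successful request."""
        EVENTS.append(Event(user_uid, request_id, action, content_id, payload))
        ACTIONS.append(Action(content_id, edition_id, action, user_uid))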
https://docs.publishing.service.gov.uk/apps/publishing-api/model.html
2021-06-12T20:54:46
CC-MAIN-2021-25
1623487586390.4
['https://raw.githubusercontent.com/alphagov/publishing-api/main/docs/model/object-model.png', 'Diagram of the object model']
docs.publishing.service.gov.uk
An endpoint protection application that you might have on endpoints is the Microsoft Enhanced Mitigation Experience Toolkit (EMET). EMET is designed to detect and protect against common attack techniques and actions. Integrating EMET into the Carbon Black EDR environment gives you a single place to investigate attacks detected and stopped by EMET, while taking advantage of the additional visibility provided by Carbon Black EDR.

When EMET events become part of the Carbon Black EDR database, you can search for them, use them to trigger alerts, and perform process analysis to understand the relationships between an EMET event and other events on one or more endpoints in your organization, including the timeline of those relationships. In addition, EMET events can become part of the syslog output from the Carbon Black EDR server.

- This documentation uses the term “EMET event” to indicate the case where EMET detects an exploit attempt. It uses the term “EMET configuration” to indicate the protections enabled by EMET for that process.
- EMET features and terminology are not detailed in this document. If you need more information about EMET, see the documentation provided by Microsoft.
- Proper functioning of this integration assumes EMET is installed and configured per Microsoft recommendations. EMET 5.x versions should be compatible.
- Reporting of EMET events to Carbon Black EDR requires that the Windows Event Log be selected as one of the Reporting options in the EMET interface on the host.

By default, EMET-enabled sensors report EMET events and configuration to the Carbon Black EDR server. The integration does not require interaction with another server. For any reporting sensor, this information appears in several places in the Carbon Black EDR console:

- On the Search Processes page, you can search for processes for which an EMET mitigation occurred and/or processes whose sensor has EMET protection enabled. See “Process Search and Analysis” in the VMware Carbon Black EDR User Guide.
- On the Process Analysis page, EMET events are displayed and labeled in the table of events, and the EMET settings specific to the current process on the reporting sensor are included. See “Process Search and Analysis” in the VMware Carbon Black EDR User Guide.
- On the Sensor Details page, the general EMET configuration (if any) is shown. See “Managing Sensors” in the VMware Carbon Black EDR User Guide.

To further enhance the EMET integration, you can enable the EMET Protection Feed on the Threat Intelligence Feeds page (see “Threat Intelligence Feeds” in the VMware Carbon Black EDR User Guide). Enabling the feed does not enable or disable delivery of events from sensors, but it does enable you to:

- Create alerts based on EMET events and manage them on the Triage Alerts page.
- Specify delivery of an email alert when an EMET event occurs.
- Include EMET events in the syslog output from your Carbon Black EDR server.
https://docs.vmware.com/en/VMware-Carbon-Black-EDR/7.5/cb-edr-integration-guide/GUID-C26158A5-23C0-445A-923D-C6C6E3EBBEC6.html
2021-06-12T21:06:55
CC-MAIN-2021-25
1623487586390.4
[]
docs.vmware.com