content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
cameras
Classes:
Functions:
- OPENCV_LAST_INIT_TIME = <Synchronized wrapper for c_double(0.0)>
Time the last OpenCV camera was initialized (seconds, from time.time()).
v4l2 has an extraordinarily obnoxious …feature – if you try to initialize two cameras at ~the same time, you will get a neverending stream of informative error messages:
VIDIOC_QBUF: Invalid argument
The workaround seems to be relatively simple, we just wait ~2 seconds if another camera was just initialized.
- class Camera(fps=None, timed=False, crop=None, rotate: int = 0, **kwargs)[source]
Bases:
autopilot.hardware.Hardware
Metaclass for Camera objects. Should not be instantiated on its own.
- Parameters
fps (int) – Framerate of video capture
timed (bool, int, float) – If False (default), camera captures indefinitely. If int or float, captures for this many seconds
rotate (int) – Number of times to rotate image clockwise (default 0). Note that image rotation should happen in
_grab()or be otherwise implemented in each camera subclass, because it’s a common enough operation many cameras have some optimized way of doing it.
**kwargs –
Arguments to
stream(),
write(), and
queue()can be passed as dictionaries, eg.:
stream={'to':'T', 'ip':'localhost'}
When the camera is instantiated and
capture()is called, the class uses a series of methods that should be overwritten in subclasses. Further details for each can be found in the relevant method documentation.
It is highly recommended to instantiate Cameras with a
Hardware.name, as it is used in
output_filenameand to identify the network stream
Three methods are required to be overwritten by all subclasses:
init_cam()- required - used by
cam, instantiating the camera object so that it can be queried and configured
_grab()- required - grab a frame from the
cam
_timestamp()- required - get a timestamp for the frame
The other methods are optional and depend on the particular camera:
capture_init()- optional - any required routine to prepare the camera after it is instantiated but before it begins to capture
_process()- optional - the wrapper around a full acquisition cycle, including streaming, writing, and queueing frames
_write_frame()- optional - how to write an individual frame to disk
_write_deinit()- optional - any required routine to finish writing to disk after acquisition
capture_deinit()- optional - any required routine to stop acquisition but not release the camera instance.
- Variables
frame (tuple) – The current captured frame as a tuple (timestamp, frame).
shape (tuple) – Shape of captured frames (height, width, channels)
blosc (bool) – If True (default), use blosc compression when
cam – The object used to interact with the camera
fps (int) – Framerate of video capture
timed (bool, int, float) – If False (default), camera captures indefinitely. If int or float, captures for this many seconds
q (Queue) – Queue that allows frames to be pulled by other objects
queue_size (int) – How many frames should be buffered in the queue.
initialized (threading.Event) – Called in
init_cam()to indicate the camera has been initialized
stopping (threading.Event) – Called to signal that capturing should stop. when set, ends the threaded capture loop
capturing (threading.Event) – Set when camera is actively capturing
streaming (threading.Event) – Set to indicate that the camera is streaming data over the network
writing (threading.Event) – Set to indicate that the camera is writing video locally
queueing (threading.Event) – Indicates whether frames are being put into
q
indicating (threading.Event) – Set to indicate that capture progress is being indicated in stdout by
tqdm
- Parameters
fps
timed
crop (tuple) – (x, y of top left corner, width, height)
**kwargs
Attributes:
Methods:
- capture(timed=None)[source]
Spawn a thread to begin capturing.
- Parameters
timed (None, int, float) – if None, record according to
timed(default). If numeric, record for
timedseconds.
- stream(to='T', ip=None, port=None, min_size=5, **kwargs)[source]
Enable streaming frames on capture.
Spawns a
Net_Nodewith
Hardware.init_networking(), and creates a streaming queue with
Net_Node.get_stream()according to args.
Sets
Camera.streaming
- Parameters
to (str) – ID of the recipient. Default ‘T’ for Terminal.
ip (str) – IP of recipient. If None (default), ‘localhost’. If None and
tois ‘T’,
prefs.get('TERMINALIP')
port (int, str) – Port of recipient socket. If None (default),
prefs.get('MSGPORT'). If None and
tois ‘T’,
prefs.get('TERMINALPORT').
min_size (int) – Number of frames to collect before sending (default: 5). use 1 to send frames as soon as they are available, sacrificing the efficiency from compressing multiple frames together
**kwargs – passed to
Hardware.init_networking()and thus to
Net_Node
- l_start(val)[source]
Begin capturing by calling
Camera.capture()
- Parameters
val – unused
- l_stop(val)[source]
Stop capture by calling
Camera.release()
- Parameters
val – unused
- write(output_filename=None, timestamps=True, blosc=True)[source]
Enable writing frames locally on capture
Spawns a
Video_Writerto encode video, sets
writing
- Parameters
output_filename (str) – path and filename of the output video. extension should be
.mp4, as videos are encoded with libx264 by default.
timestamps (bool) – if True, (timestamp, frame) tuples will be put in the
_write_q. if False, timestamps will be generated by
Video_Writer(not recommended at all).
blosc (bool) – if true, compress frames with
blosc.pack_array()before putting in
_write_q.
- queue(queue_size=128)[source]
Enable stashing frames in a queue for a local consumer.
Other objects can get frames as they are acquired from
q
- Parameters
queue_size (int) – max number of frames that can be held in
q
- property cam
Camera object.
If
_camhasn’t been initialized yet, use
init_cam()to do so
- Returns
Camera object, different for each camera.
- property output_filename
Filename given to video writer.
If explicitly set, returns as expected.
If None, or path already exists while the camera isn’t capturing, a new filename is generated in the user directory.
- Returns
(str)
_output_filename
- init_cam()[source]
Method to initialize camera object
Must be overridden by camera subclass
- Returns
camera object
- capture_deinit()[source]
Optional: Return
camto an idle state after capturing, but before releasing
- Returns
None
- class PiCamera(camera_idx: int = 0, sensor_mode: int = 0, resolution: Tuple[int, int] = (1280, 720), fps: int = 30, format: str = 'rgb', *args, **kwargs)[source]
Bases:
autopilot.hardware.cameras.Camera
Interface to the Raspberry Pi Camera Module via picamera
Parameters of the
picamera.PiCameraclass can be set after initialization by modifying the
PiCamera.camattribute, eg
PiCamera().cam.exposure_mode = 'fixedfps'– see the
picamera.PiCameradocumentation for full documentation.
Note that some parameters, like resolution, can’t be changed after starting
capture().
The Camera Module is a slippery little thing, and
fpsand
resolutionare just requests to the camera, and aren’t necessarily followed with 100% fidelity. The possible framerates and resolutions are determined by the
sensor_modeparameter, which by default tries to guess the best sensor mode based on the fps and resolution. See the Sensor Modes documentation for more details.
This wrapper uses a subclass,
PiCamera.PiCamera_Writerto capture frames decoded by the gpu directly from the preallocated buffer object. Currently the restoration from the buffer assumes that RGB, or generally
shape[2] == 3, images are being captured. See this stackexchange post by Dave Jones, author of the picamera module, for a strategy for capturing grayscale images quickly.
This class also currently uses the default
Video_Writerobject, but it could be more performant to use the
picamera.PiCamera.start_recording()method’s built-in ability to record video to a file — try it out!
Todo
Currently timestamps are constructed with
datetime.datetime.now.isoformat(), which is not altogether accurate. Timestamps should be gotten from the
frameattribute, which depends on the
clock_mode
References
Fast capture from the author of picamera -
More on fast capture and processing, see last example in section -
- Parameters
camera_idx (int) – Index of picamera (default: 0, >=1 only supported on compute module)
sensor_mode (int) – Sensor mode, default 0 detects automatically from resolution and fps, note that sensor_mode will affect the available resolutions and framerates, see Sensor Modes for more information
resolution (tuple) – a tuple of (width, height) integers, but mind the note in the above documentation regarding the sensor_mode property and resolution
fps (int) – frames per second, but again mind the note on sensor_mode
format (str) – Format passed to :class`picamera.PiCamera.start_recording` one of
('rgb' (default), 'grayscale')The
'grayscale'format uses the
'yuv'format, and extracts the luminance channel
*args () – passed to superclass
**kwargs () – passed to superclass
Attributes:
Methods:
Classes:
- property sensor_mode: int
Sensor mode, default 0 detects automatically from resolution and fps, note that sensor_mode will affect the available resolutions and framerates, see Sensor Modes for more information.
When set, if the camera has been initialized, will change the attribute in
PiCamera.cam
- Returns
int
- property resolution: Tuple[int, int]
A tuple of ints, (width, height).
Resolution can’t be changed while the camera is capturing.
See Sensor Modes for more information re: how resolution relates to
picamera.PiCamera.sensor_mode
- Returns
tuple of ints, (width, height)
- property fps: int
Frames per second
See Sensor Modes for more information re: how fps relates to
picamera.PiCamera.sensor_mode
- Returns
int - fps
- property rotation: int
Rotation of the captured image, derived from
Camera.rotate* 90.
Must be one of
(0, 90, 180, 270)
Rotation can be changed during capture
- Returns
int - Current rotation
- init_cam() picamera.PiCamera [source]
Initialize and return the
picamera.PiCameraobject.
Uses the stored
camera_idx,
resolution,
fps, and
sensor_modeattributes on init.
- Returns
-
- capture_init()[source]
Spawn a
PiCamera.PiCamera_Writerobject to
PiCamera._picam_writerand
start_recording()in the set
format
- capture_deinit()[source]
stop_recording()and
close()the camera, releasing its resources.
- release()[source]
Release resources held by Camera.
Must be overridden by subclass.
Does not raise exception in case some general camera release logic should be put here…
- class PiCamera_Writer(resolution: Tuple[int, int], format: str = 'rgb')[source]
Writer object for processing individual frames, see:
- Parameters
resolution (tuple) – (width, height) tuple used when making numpy array from buffer
- Variables
grab_event (
threading.Event) – Event set whenever a new frame is captured, cleared by the parent class when the frame is consumed.
frame (
numpy.ndarray) – Captured frame
timestamp (str) – Isoformatted timestamp of time of capture.
Methods:
- class Camera_CV(camera_idx=0, **kwargs)[source]
Bases:
autopilot.hardware.cameras.Camera
Capture Video from a webcam with OpenCV
By default, OpenCV will select a suitable backend for the indicated camera. Some backends have difficulty operating multiple cameras at once, so the performance of this class will be variable depending on camera type.
Note
OpenCV must be installed to use this class! A Prebuilt opencv binary is available for the raspberry pi, but it doesn’t take advantage of some performance-enhancements available to OpenCV. Use
autopilot.setup.run_script opencvto compile OpenCV with these enhancements.
If your camera isn’t working and you’re using v4l2, to print debugging information you can run:
# set the debug log level echo 3 > /sys/class/video4linux/videox/dev_debug # check logs dmesg
- Parameters
-
- Variables
camera_idx (int) – The index of the desired camera
last_opencv_init (float) – See
OPENCV_LAST_INIT_TIME
last_init_lock (
threading.Lock) – Lock for setting
last_opencv_init
Attributes:
Methods:
- property fps
Attempts to get FPS with
cv2.CAP_PROP_FPS, uses 30fps as a default
- Returns
framerate
- Return type
-
- property shape
Attempts to get image shape from
cv2.CAP_PROP_FRAME_WIDTHand
HEIGHT:returns: (width, height) :rtype: tuple
- property backend
capture backend used by OpenCV for this camera
- Returns
name of capture backend used by OpenCV for this camera
- Return type
-
- init_cam()[source]
Initializes OpenCV Camera
To avoid overlapping resource allocation requests, checks the last time any
Camera_CVobject was instantiated and makes sure it has been at least 2 seconds since then.
- Returns
camera object
- Return type
cv2.VideoCapture
- release()[source]
Release resources held by Camera.
Must be overridden by subclass.
Does not raise exception in case some general camera release logic should be put here…
- class Camera_Spinnaker(serial=None, camera_idx=None, **kwargs)[source]
Bases:
autopilot.hardware.cameras.Camera
Capture video from a FLIR brand camera with the Spinnaker SDK.
- Parameters
-
Note
PySpin and the Spinnaker SDK must be installed to use this class. Please use the
install_pyspin.shscript in
setup
See the documentation for the Spinnaker SDK and PySpin here:
- Variables
serial (str) – Serial number of desired camera
camera_idx (int) – If no serial provided, select camera by index. Using
serialis HIGHLY RECOMMENDED.
system (
PySpin.System) – The PySpin System object
cam_list (
PySpin.CameraList) – The list of PySpin Cameras available to the system
nmap – A reference to the nodemap from the GenICam XML description of the device
The directory and base filename that images will be written to if object is
writing. eg:
base_path = ‘/home/user/capture_directory/capture_’ image_path = base_path + ‘image1.png’
img_opts (
PySpin.PNGOption) – Options for saving .png images, made by
write()
Attributes:
Methods:
- init_cam()[source]
Initialize the Spinnaker Camera
Initializes the camera, system, cam_list, node map, and the camera methods and attributes used by
get()and
set()
- Returns
The Spinnaker camera object
- Return type
PySpin.Camera
- capture_init()[source]
Prepare the camera for acquisition
calls the camera’s
BeginAcquisitionmethod and populate
shape
- write(output_filename=None, timestamps=True, blosc=True)[source]
Sets camera to save acquired images to a directory for later encoding.
For performance, rather than encoding during acquisition, save each image as a (lossless) .png image in a directory generated by
output_filename.
After capturing is complete, a
Directory_Writerencodes the images to an x264 encoded .mp4 video.
- Parameters
output_filename (str) – Directory to write images to. If None (default), generated by
output_filename
timestamps (bool) – Not used, timestamps are always appended to filenames.
blosc (bool) – Not used, images are directly saved.
- property bin
Camera Binning.
Attempts to bin on-device, and use averaging if possible. If averaging not available, uses summation.
- Parameters
tuple – tuple of integers, (Horizontal, Vertical binning)
- Returns
(Horizontal, Vertical binning)
- Return type
-
- property exposure
Set Exposure of camera
Can be set with
'auto'- automatic exposure control. note that this will limit framerate
floatfrom 0-1 - exposure duration proportional to fps. eg. if fps = 10, setting exposure = 0.5 means exposure will be set as 50ms
floator
int>1 - absolute exposure time in microseconds
- Returns
If exposure has been set, return set value. Otherwise return
.get('ExposureTime')
- Return type
-
- property fps
Acquisition Framerate
Set with integer. If set with None, ignored (superclass sets FPS to None on init)
- Returns
from
cam.AcquisitionFrameRate.GetValue()
- Return type
-
- property frame_trigger
Set camera to lead or follow hardware triggers
If
'lead', Camera will send TTL pulses from Line 2.
If
'follow', Camera will follow triggers from Line 3.
- property acquisition_mode
Image acquisition mode
One of
'continuous'- continuously acquire frame camera
'single'- acquire a single frame
'multi'- acquire a finite number of frames.
Warning
Only
'continuous'has been tested.
- property readable_attributes
All device attributes that are currently readable with
get()
- Returns
A dictionary of attributes that are readable and their current values
- Return type
-
- property writable_attributes
All device attributes that are currently writeable wth
set()
- Returns
A dictionary of attributes that are writeable and their current values
- Return type
-
- get(attr)[source]
Get a camera attribute.
Any value in
readable_attributescan be read. Attempts to get numeric values with
.GetValue, otherwise gets a string with
.ToString, so be cautious with types.
If
attris a method (ie. in
._camera_methods, execute the method and return the value
- Parameters
attr (str) – Name of a readable attribute or executable method
- Returns
Value of
attr
- Return type
-
- set(attr, val)[source]
Set a camera attribute
Any value in
writeable_attributescan be set. If attribute has a
.SetValuemethod, (ie. accepts numeric values), attempt to use it, otherwise use
.FromString.
- Parameters
attr (str) – Name of attribute to be set
val (str, int, float) – Value to set attribute
- list_options(name)[source]
List the possible values of a camera attribute.
- Parameters
name (str) – name of attribute to query
- Returns
Dictionary with {available options: descriptions}
- Return type
-
- property device_info
Get all information about the camera
Note that this is distinct from camera attributes like fps, instead this is information like serial number, version, firmware revision, etc.
- Returns
{feature name: feature value}
- Return type
-
- class Video_Writer(q, path, fps=None, timestamps=True, blosc=True)[source]
Bases:
multiprocessing.context.Process
Encode frames as they are acquired in a separate process.
Must call
start()after initialization to begin encoding.
Encoding continues until ‘END’ is put in
q.
Timestamps are saved in a .csv file with the same path as the video.
- Parameters
q (
Queue) – Queue into which frames will be dumped
path (str) – output path of video
fps (int) – framerate of output video
timestamps (bool) – if True (default), input will be of form (timestamp, frame). if False, input will just be frames and timestamps will be generated as the frame is encoded (not recommended)
blosc (bool) – if True, frames in the
qwill be compresed with blosc. if False, uncompressed
- Variables
timestamps (list) – Timestamps for frames, written to .csv on completion of encoding
Methods:
- run()[source]
Open a
skvideo.io.FFmpegWriterand begin processing frames from
q
Should not be called by itself, overwrites the
multiprocessing.Process.run()method, so should call
Video_Writer.start()
Continue encoding until ‘END’ put in queue. | https://docs.auto-pi-lot.com/en/main/hardware/cameras.html | 2022-08-07T23:01:49 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.auto-pi-lot.com |
Transformations
Data transformations.
Composable transformations from one representation of data to another.
Used as the lubricant and glue between hardware objects. Some hardware objects
disagree about the way information should be represented – eg. cameras are very
partial to letting position information remain latent in a frame of a video, but
some other object might want the actual
[x,y] coordinates. Transformations help
negotiate (but don’t resolve their irreparably different worldviews :( )
Transformations are organized by modality, but this API is quite immature.
Transformations have a
process method that accepts and returns a single object.
They must also define the format of their inputs and outputs (
format_in
and
format_out). That API is also a sketch.
The
__add__() method allows transforms to be combined, eg.:
from autopilot import transform as t transform_me = t.Image.DLC('model_directory') transform_me += t.selection.DLCSlice('point') transform_me.process(frame) # ... etcetera
Todo
This is a first draft of this module and it purely synchronous at the moment. It will be expanded to … * support multiple asynchronous processing rhythms * support automatic value coercion * make recursion checks – make sure a child hasn’t already been added to a processing chain. * idk participate at home! list your own shortcomings of this module, don’t be shy it likes it.
Functions:
- make_transform(transforms: Union[List[dict], Tuple[dict]]) autopilot.transform.transforms.Transform [source]
Make a transform from a list of iterator specifications.
- Parameters
transforms (list) –
A list of
Transforms and parameterizations in the form:
[ {'transform': Transform, 'args': (arg1, arg2,), # optional 'kwargs': {'key1':'val1', ...}, # optional {'transform': ...} ]
- Returns
Transform
Data transformations.
Experimental module.
Reusable transformations from one representation of data to another. eg. converting frames of a video to locations of objects, or locations of objects to area labels
Todo
This is a preliminary module and it purely synchronous at the moment. It will be expanded to … * support multiple asynchronous processing rhythms * support automatic value coercion
The following design features need to be added * recursion checks – make sure a child hasn’t already been added to a processing chain.
Classes:
- class TransformRhythm(value)[source]
- Variables
FIFO – First-in-first-out, process inputs as they are received, potentially slowing down the transformation pipeline
FILO – First-in-last-out, process the most recent input, ignoring previous (lossy transformation)
Attributes:
- class Transform(rhythm: autopilot.transform.transforms.TransformRhythm = <TransformRhythm.FILO: 2>, *args, **kwargs)[source]
Metaclass for data transformations
Each subclass should define the following
process()- a method that takes the input of the transoformation as its single argument and returns the transformed output
format_in- a dict that specifies the input format
format_out- a dict that specifies the output format
- Parameters
rhythm (
TransformRhythm) – A rhythm by which the transformation object processes its inputs
- Variables
(class (child) – Transform): Another Transform object chained after this one
Attributes:
Methods:
- property rhythm: autopilot.transform.transforms.TransformRhythm
- property parent: Optional[autopilot.transform.transforms.Transform]
If this Transform is in a chain of transforms, the transform that precedes it
- check_compatible(child: autopilot.transform.transforms.Transform)[source]
Check that this Transformation’s
format_outis compatible with another’s
format_in
Todo
Check for types that can be automatically coerced into one another and set
_coercionto appropriate function | https://docs.auto-pi-lot.com/en/main/transform/index.html | 2022-08-07T22:33:58 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.auto-pi-lot.com |
Custom AWS Lambda runtimes
You can implement an AWS Lambda runtime in any programming language. A runtime is a program that runs a Lambda
function's handler method when the function is invoked. You can include a runtime in your function's deployment
package in the form of an executable file named
bootstrap.
A runtime is responsible for running the function's setup code, reading the handler name from an environment variable, and reading invocation events from the Lambda runtime API. The runtime passes the event data to the function handler, and posts the response from the handler back to Lambda.
Your custom runtime runs in the standard Lambda execution environment. It can be a shell script, a script in a language that's included in Amazon Linux, or a binary executable file that's compiled in Amazon Linux.
To get started with custom runtimes, see Tutorial – Publishing a custom runtime. You can also explore a custom runtime implemented in C++ at awslabs/aws-lambda-cpp
Using a custom runtime
To use a custom runtime, set your function's runtime to
provided. The runtime can be included in
your function's deployment package, or in a layer.
Example function.zip
. ├── bootstrap ├── function.sh
If there's a file named
bootstrap in your deployment package, Lambda runs that file.
If not, Lambda looks for a runtime in the function's layers. If the bootstrap file isn't found or isn't executable,
your function returns an error upon invocation.
Building a custom runtime
A custom runtime's entry point is an executable file named
bootstrap. The bootstrap file
can be the runtime, or it can invoke another file that creates the runtime. The following example uses a bundled
version of Node.js to run a JavaScript runtime in a separate file named
runtime.js.
Example bootstrap
#!/bin/sh cd $LAMBDA_TASK_ROOT ./node-v11.1.0-linux-x64/bin/node runtime.js
Your runtime code is responsible for completing some initialization tasks. Then it processes invocation events in a loop until it's terminated. The initialization tasks run once per instance of the function to prepare the environment to handle invocations.
Initialization tasks
Retrieve settings – Read environment variables to get details about the function and environment.
_HANDLER– The location to the handler, from the function's configuration. The standard format is
, where
file.
method Defined runtime environment variables.
Initialization counts towards billed execution time and timeout. When an execution triggers the initialization of a new instance of your function, you can see the initialization time in the logs and AWS X-Ray trace.
Example log
REPORT RequestId: f8ac1208... Init Duration: 48.26 ms Duration: 237.17 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 26 MB
While it runs, a runtime uses the Lambda runtime interface to manage incoming events and report errors. After completing initialization tasks, the runtime processes incoming events in a loop. In your runtime code, perform the following steps in order.
Processing tasks locally with the same value. The X-Ray SDK uses this value to connect trace data between services..
You can include the runtime in your function's deployment package, or distribute the runtime separately in a function layer. For an example walkthrough, see Tutorial – Publishing a custom runtime. | https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html | 2022-08-07T23:58:34 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.aws.amazon.com |
介.
Editor Layout).
- Header
This region displays menus and buttons for interacting with the editor. The header changes slightly depending on the selected view type (see below).
- 預覽
This region shows the output of the Sequencer at the time of the Playhead.
- Sequencer
This region shows timeline for managing the montage of strips.
- 屬性
This region shows the properties of the active strip. Is divided into panels and tabs. Toggle on or off with N key.
- Toolbar
This region shows a list of icons, clicking on a icon will changes the active tool. Toggle on or off with T key.
View Types
The Video Sequencer has three view types which can be changed with the View Type menu (see figure 1; top left).
Figure 2: Three view types for the Video Sequence Editor
- Sequencer
View timeline and strip properties.
- 預覽
View preview window and preview properties.
- Sequencer & Preview
Combined view of preview and timeline and properties of both.
Tip
It is possible to create multiple instances of any view type in single workspace.
Performance. | https://docs.blender.org/manual/zh-hant/dev/editors/video_sequencer/introduction.html | 2022-08-07T22:01:50 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.blender.org |
Overview
In this document, we will go over the process of obtaining your API keys from Bokun so that you are able to put them into the Getlocal Portal and begin importing your products. The API keys are required to act as the bridge between the Getlocal portal and the Bokun inventory system, which is what allows information to flow between the two.
Step One: Sign up to/Log into Bokun
As well as a Getlocal Portal account, you will also require an account with an inventory provider. In this instance, we will be assuming that the inventory provider is the Tripadvisor Bokun system.
Navigate to your Bokun account by going to company-name.bokun.io and logging in with your user account details. Alternatively, go to bokun.io and sign up for an account with them.
Step Two: Create a Booking Channel
Before we can generate the API keys, it is important to make sure that the settings behind them are done properly. By creating a booking channel, you have the advantage of being able to easily track the sales from your Getlocal website, and separate them from sales from other sources. You are also able to set a number of important settings that determine how things are priced.
To get to these settings, navigate to the left-hand menu of Bokun, and click the Online Sales drop-down tab. This will open a collection of links, and the Booking Channel link will be found here.
On the Booking Channel page, click the blue button on in the top left that says "New Channel".
You will first be requested to give the channel a name. We recommend naming the channel with the URL or planned URL of your final site, or something along the lines of 'Getlocal Website', so that it can be easily identified.
Once the channel has been named, you will gain access to all the assorted settings that you can change - but there are a few key ones that we are going to focus on here.
- Pricing and Payments: This is the next option in the list. Here you are able to set a payment provider that you are using, as well as change settings for how payments are taken. If you intend on taking payments through Bokun, then it is wise to put the payment provider settings into here. Be sure to also select the "Allow all currencies" option that will appear as a toggle on this page.
Beyond this, the rest of the options are not important at this time. However, it can be worth going over them, in order to become aware of the control you have over the payments that will come through this channel.
Step Three: Create the API Keys
Now that you have the underlying settings correctly set up, we can create the API keys. If you already have an API key set up that you want to use, move to step four.
Please note that this feature is locked behind a Bokun Pro subscription - you will require Bokun Pro to continue.. Navigate to the "Other Settings" section, under which you will see the option API Keys, which should be the first option available to you.
In the API Keys settings screen, you will again see a blue button in the top right that says "Add".
After clicking the blue “Add” button, you will be asked to enter a title for this key. Make sure it is something memorable that you can easily identify as being for your Getlocal site. There is a toggle for "Allow offline payment", which you will want to make sure is turned off (it should be off by default). The "User Role" should be left as Admin, and then finally, select the booking channel you created in step two as the booking channel.
Once the above details are entered, you can then press the blue "Save" button to create the keys.
Step Four: Connect the Keys
Once you have created the API key, or if you have an API key already set up that you want to use, then the next step is to get the keys and connect them.. You will want the "Other Settings" section, under which you will see the option API Keys, which should be the first option available to you.
From the API settings screen, you should see a list of all the available API keys that you currently have. Click the title of one that you have chosen to use with your Getlocal site. You will now be brought to a screen that looks similar to the screen for creating the API keys, however now there will be additional information: namely, you will see that you now have access to the "Access Key" and "Secret Key". These are the Keys that you will need to paste into the Getlocal Portal in the relevant area in order to connect to your Bokun account and start importing your inventory into your site. | https://docs.getlocal.travel/concepts/get-api-keys-from-bokun | 2022-08-07T21:53:54 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.getlocal.travel |
Welcome to the inputs documentation!¶
Release v0.5
Inputs aims to provide cross-platform Python support for keyboards, mice and gamepads.
This site covers the usage of Inputs. To see the source code see the main project website on Github.
The User Guide¶
We begin with some background information about Inputs, then focus on step-by-step instructions for getting the most out of Inputs.
Developer Information¶
If you want to contribute to the project, this part of the documentation is for you. | https://inputs.readthedocs.io/en/latest/ | 2022-08-07T22:08:17 | CC-MAIN-2022-33 | 1659882570730.59 | [] | inputs.readthedocs.io |
The association struct for
many_to_many associations.
Its fields are:
cardinality- The association cardinality
field- The name of the association field on the schema
owner- The schema where the association was defined
related- The schema that is associated
owner_key- The key on the
ownersche
:child
join_keys- The keyword list with many to many join keys
join_through- Atom (representing a schema) or a string (representing a table) for many to many associations
© 2012 Plataformatec
Licensed under the Apache License, Version 2.0. | http://docs.w3cub.com/phoenix/ecto/ecto.association.manytomany/ | 2017-08-16T21:27:36 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.w3cub.com |
pyexcel-handsontable - Let you focus on data, instead of file formats¶
Introduction¶')
Alternatively, you can use this library with pyexcel cli module:
$ pip install pyexcel-cli $ pyexcel transcode your.xls your.handsontable.html
Please remember to give this file suffix always: handsontable.html. It is because handsontable.html triggers this plugin in pyexcel.
Known constraints¶
Fonts, colors and charts are not supported.
Installation¶
You can install it via pip:
$ pip install pyexcel-handsontable
or clone it and install it:
$ git clone $ cd pyexcel-handsontable $ python setup.py install
Rendering Options¶
You can pass the following options to
pyexcel.Sheet.save_as() and
pyexcel.Book.save_as(). The same options are applicable to
pyexcel’s signature functions, but please remember to add ‘dest_‘ prefix.
js_url The default url for handsontable javascript file points to cdnjs version 0.31.0. You can replace it with your custom url
css_url The default url for handsontable style sheet points to cdnjs version 0.31.0. You can replace it with your custom url
embed If it is set true, the resulting html will only contain a portion of HTML without the HTML header. And it is expected that you, as the developer to provide the necessary HTML header in your web page.
What’s more, you could apply all handsontable’s options to the rendering too. for example, ‘readOnly’ was set to True as default in this library. In the demo, ‘readOnly’ was overridden as False. | http://pyexcel-handsontable.readthedocs.io/en/latest/ | 2017-08-16T21:25:49 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['https://github.com/pyexcel/pyexcel-handsontable/raw/master/demo/screenshot.png',
'https://github.com/pyexcel/pyexcel-handsontable/raw/master/demo/screenshot.png'],
dtype=object) ] | pyexcel-handsontable.readthedocs.io |
Chainer – A flexible framework of neural networks¶
- Export Chainer to ONNX
Other
Indices and tables¶
Community | https://docs.chainer.org/en/stable/ | 2019-12-05T20:41:40 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.chainer.org |
nodetool proxyhistograms
Provides a histogram of network statistics.
Provides a histogram of network statistics at the time of the command.
Synopsis
nodetool <options> proxyhistograms
DataStax Enterprise 5.0 Installer No-Services and tarball installations:
installation_location/resources/cassandra/bin
Description
The output of this command shows the full request latency recorded by the coordinator. The output includes the percentile rank of read and write latency values for inter-node communication. Typically, you use the command to see if requests encounter a slow node.
ExamplesThis example shows the output from nodetool proxyhistograms after running 4,500 insert statements and 45,000 select statements on a three ccm node-cluster on a local computer. | https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsProxyHistograms.html | 2019-12-05T19:37:14 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.datastax.com |
Difference between revisions of "Release Notes/099/2019.10000"
Revision as of 12:01, 4 November 2019
Current Build 2019.19930 - Download Here
See our Spring 2019 Official Announcement for an overview of new features.
For experimental release notes see 2019.30000 Experimental
Contents
- 1 Official Build - 2019.10000 Series
- 1.1 New Features
- 1.2 New Python
- 1.3 New Palette
- 1.4 SDK and API Updates
- 1.5 Bug Fixes and Improvements
- 1.6 Backwards Compatibility
- 2 Build 2019.19930 - Nov 1, 2019
- 3 Build 2019.19160 - Sep 18, 2019
- 4 Build 2019.18580 - Aug 23, 2019
- 5 Build 2019.17550 - Jul 24, 2019
- 6 Build 2019.16600 - Jun 21, 2019
- 7 Build 2019.15840 - May 31, 2019
- 8 Build 2019.15230 - May 15, 2019
- 9 Official Build 2019.14650 - May 06, 2019
- 10 Experimental Builds 2019.10000 / 2018.40000 - April 09, 2019
- 11. See Snippets for the Bullet Solver.
- quickly.
- To get you specific to Snippets faster, right-click on a node or the OP Create dialog and select OP Snippets for that node...
A 3D viewport for viewing and manipulating 3D scenes or objects interactively. A geometry viewer can be found in Panes (alt+3 in any pane) or the Node Viewers of all Geometry Object components.
The viewer of a node can be (1) the interior of a node (the Node Viewer), (2) a floating window (RMB->View... on node), or (3) a Pane that graphically shows the results of an operator.
A parameter in most CHOPs that restricts which channels of that CHOP will be affected. Normally all channels of a CHOP are affected by the operator.. | https://docs.derivative.ca/index.php?title=Release_Notes/099/2019.10000&diff=17262&oldid=15718 | 2019-12-05T20:55:42 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.derivative.ca |
External sharing overview
The external sharing features of SharePoint Online let users in your organization share content with people outside the organization (such as partners, vendors, clients, or customers). You can also use external sharing to share between licensed users on multiple Office 365 subscriptions if your organization has more than one subscription. Planning for external sharing should be included as part of your overall permissions planning for SharePoint Online. This article describes what happens when users share, depending on what they're sharing and with whom.
If you want to get straight to setting up sharing, choose the scenario you want to enable:
- Collaborate with guests on a document
- Collaborate with guests in a site
- Collaborate with guests in a team
(If you're trying to share a file or folder, see Share OneDrive files and folders or Share SharePoint files or folders in Office 365.)
Note
External sharing is turned on by default for your entire SharePoint Online environment and the sites in it. You may want to turn it off globally before people start using sites or until you know exactly how you want to use the feature.
How the external sharing settings work
SharePoint Online has external sharing settings at both the organization level and the site level (previously called the "site collection" level). To allow external sharing on any site, you must allow it at the organization level. You can then restrict external sharing for other sites. If a site's external sharing option and the organization-level sharing option don't match, the most restrictive value will always be applied.
Whichever option you choose at the organization or site level, the more restrictive functionality is still available. For example, if you choose to allow sharing using "Anyone" links (previously called "shareable" links or "anonymous access" links), users can still share with guests who sign in, and with internal users.
Important
Even if your organization-level setting allows external sharing, not all new sites allow it by default. The default sharing setting for Office 365 group-connected team sites is "New and existing guests." The default for communication sites and classic sites is "Only people in your organization.".
Note
To limit internal sharing of contents on a site, you can prevent site members from sharing, and enable access requests. For info, see Set up and manage access requests.
When users share a folder with multiple guests, the guests will be able to see each other's names in the Manage Access panel for the folder (and any items within it).
Sharing Office 365 group-connected team sites
When you or your users create Office 365 groups (for example in Outlook, or by creating a team in Microsoft Teams), a SharePoint team site is created. Admins and users can also create team sites in SharePoint, which creates an Office 365 group. For group-connected team sites, the group owners are added as site owners, and the group members are added as site members. In most cases, you'll want to share these sites by adding people to the Office 365 group. However, you can share only the site.
Important
It's important that all group members have permission to access the team site. If you remove the group's permission, many collaboration tasks (such as sharing files in Teams chats) won't work. Only add guests to the group if you want them to be able to access the site. For info about guest access to Office 365 groups, see Manage guest access in Groups.
What happens when users share
When users share with people outside the organization, an invitation is sent to the person in email, which contains a link to the shared item.
Recipients who sign in
When users share sites, recipients will be prompted to sign in with:
- A Microsoft account
- A work or school account in Azure AD from another organization
When users share files and folders, recipients will also be prompted to sign in if they have:
- A Microsoft account
These recipients will typically be added to your directory as guests, and then permissions and groups work the same for these guests as they do for internal users. (To ensure that all guests are added to your directory, use the SharePoint and OneDrive integration with Azure AD B2B preview.)
Because these guests do not have a license in your organization, they are limited to basic collaboration tasks:
They can use Office.com for viewing and editing documents. If your plan includes Office Professional Plus, they can't install the desktop version of Office on their own computers unless you assign them a license.
They can perform tasks on a site based on the permission level that they've been given. For example, if you add a guest as a site member, they will have Edit permissions and they will be able to add, edit and delete lists; they will also be able to view, add, update and delete list items and files.
They will be able to see other types of content on sites, depending on the permissions they've been given. For example, they can navigate to different subsites within a shared site. They will also be able to do things like view site feeds.
If your authenticated guests need greater capability such as OneDrive storage or creating a Power Automate flow, you must assign them an appropriate license. To do this, sign in to the Microsoft 365 admin center as a global admin, make sure the Preview is off, go to the Active users page, select the guest, click More, and then click Edit product licenses.
Recipients who provide a verification code
When users share files or folders, recipients will be asked to enter a verification code if they have:
- A work or school account in Azure AD from another organization
- An email address that isn't a Microsoft account or a work or school account in Azure AD
If the recipient has a work or school account, they only need to enter the code the first time. Then they will be added as a guest and can sign in with their organization's user name and password.
If the recipient doesn't have a work or school account, they need to use a code each time they access the file or folder, and they are not added to your directory.
Note
Sites can't be shared with people unless they have a Microsoft account or a work or school account in Azure AD.
Recipients who don't need to authenticate
Anyone with the link (inside or outside your organization) can access files and folders without having to sign in or provide a code. These links can be freely passed around and are valid until the link is deleted or expires (if you've set an expiration date). You cannot verify the identity of the people using these links, but their IP address is recorded in audit logs when they access or edit shared content.
People who access files and folders anonymously through "Anyone" links aren't added to your organization's directory, and you can't assign them licenses. They also can't access sites anonymously. They can only view or edit the specific file or folder for which they have an "Anyone" link.
Stopping sharing
You can stop sharing with guests by removing their permissions from the shared item, or by removing them as a guest in your directory.
You can stop sharing with people who have an "Anyone" link by going to the file or folder that you shared and deleting the link.
Learn how to stop sharing an item
Need more help?
Feedback | https://docs.microsoft.com/en-us/sharepoint/external-sharing-overview?redirectSourcePath=%252fen-us%252farticle%252fManage-external-sharing-for-your-SharePoint-online-environment-c8a462eb-0723-4b0b-8d0a-70feafe4be85) | 2019-12-05T20:23:28 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['sharepointonline/media/sharing-invitation.png',
'A sharing invitation in email'], dtype=object)
array(['sharepointonline/media/sign-in-msa-org.png', 'Sign-in screen'],
dtype=object)
array(['sharepointonline/media/verification-code.png',
'Enter Verification Code screen'], dtype=object)
array(['sharepointonline/media/anyone-link.png',
'Sharing a folder by using an "Anyone" link'], dtype=object)] | docs.microsoft.com |
The
spring-boot-loader modules lets Spring Boot support executable jar and war files.
If you use the Maven plugin or the Gradle plugin, executable jars are automatically generated, and you generally do not need to know the details of how they work.
If you need to create executable jars from a different build system or if you are just curious about the underlying technology, this appendix provides some background.
1. Nested JARs
Java does not provide any standard way to load nested jar files (that is, jar files that are themselves contained within a jar). This can be problematic if you need to distribute a self-contained application that can be run from the command line without unpacking.
To solve this problem, many developers use “shaded” jars. A shaded jar packages all classes, from all jars, into a single “uber jar”. The problem with shaded jars is that it becomes hard to see which libraries are actually in your application. It can also be problematic if the same filename is used (but with different content) in multiple jars. Spring Boot takes a different approach and lets you actually nest jars directly.
1.1. The Executable Jar File Structure.
1.2. The Executable War File Structure
Spring Boot Loader-compatible war files should be structured in the following way:
example.war | +-META-INF | +-MANIFEST.MF +-org | +-springframework | +-boot | +-loader | +-<spring boot loader classes> +-WEB-INF +-classes | +-com | +-mycompany | +-project | +-Your.
2. Spring Boot’s “JarFile” Class
The core class used to support loading nested jars is
org.springframework.boot.loader.jar.JarFile.
It lets you load jar content from a standard jar file or from nested child jar data.
When first loaded, the location of each
JarEntry is mapped to a physical file offset of the outer jar, as shown in the following example:
myapp.jar +-------------------+-------------------------+ | /BOOT-INF/classes | /BOOT-INF/lib/mylib.jar | |+-----------------+||+-----------+----------+| || A.class ||| B.class | C.class || |+-----------------+||+-----------+----------+| +-------------------+-------------------------+ ^ ^ ^ 0063 3452 3980
The preceding example shows how
A.class can be found in
/BOOT-INF/classes in
myapp.jar at position
0063.
B.class from the nested jar can actually be found in
myapp.jar at position
3452, and
C.class is at position
3980.
Armed with this information, we can load specific nested entries by seeking to the appropriate part of the outer jar. We do not need to unpack the archive, and we do not need to read all entry data into memory.
2.1. Compatibility with the Standard Java “JarFile”
Spring Boot Loader strives to remain compatible with existing code and libraries.
org.springframework.boot.loader.jar.JarFile extends from
java.util.jar.JarFile and should work as a drop-in replacement.
The
getURL() method returns a
URL that opens a connection compatible with
java.net.JarURLConnection and can be used with Java’s
URLClassLoader.
3. Launching Executable Jars
The
org.springframework.boot.loader.Launcher class is a special bootstrap class that is used as an executable jar’s main entry point.
It is the actual
Main-Class in your jar file, and it is used to setup an appropriate
URLClassLoader and ultimately call your
main() method.
There are three launcher subclasses (
JarLauncher,
WarLauncher, and
PropertiesLauncher).
Their purpose is to load resources (
.class files and so on) from nested jar files or war files in directories (as opposed to those explicitly on the classpath).
In the case of
JarLauncher and
WarLauncher, the nested paths are fixed.
JarLauncher looks in
BOOT-INF/lib/, and
WarLauncher looks in
WEB-INF/lib/ and
WEB-INF/lib-provided/.
You can add extra jars in those locations if you want more.
The
PropertiesLauncher looks in
BOOT-INF/lib/ in your application archive by default.
You can add additional locations by setting an environment variable called
LOADER_PATH or
loader.path in
loader.properties (which is a comma-separated list of directories, archives, or directories within archives).
3.1. Launcher Manifest
You need to specify an appropriate
Launcher as the
Main-Class attribute of
META-INF/MANIFEST.MF.
The actual class that you want to launch (that is, the class that contains a
main method) should be specified in the
Start-Class attribute.
The following example shows a typical
MANIFEST.MF for an executable jar file:
Main-Class: org.springframework.boot.loader.JarLauncher Start-Class: com.mycompany.project.MyApplication
For a war file, it would be as follows:
Main-Class: org.springframework.boot.loader.WarLauncher Start-Class: com.mycompany.project.MyApplication
4.
PropertiesLauncher Features
PropertiesLauncher has a few special features that can be enabled with external properties (System properties, environment variables, manifest entries, or
loader.properties).
The following table describes these properties:
When specified as environment variables or manifest entries, the following names should be used:
The following rules apply to working with
PropertiesLauncher:
loader.propertiesis searched for in
loader.home, then in the root of the classpath, and then in
classpath:/BOOT-INF/classes. The first location where a file with that name exists is used.
loader.homeis the directory location of an additional properties file (overriding the default) only when
loader.config.locationis not specified.
loader.pathcan contain directories (which are scanned recursively for jar and zip files), archive paths, a directory within an archive that is scanned for jar files (for example,
dependencies.jar!/lib), or wildcard patterns (for the default JVM behavior). Archive paths can be relative to
loader.homeor anywhere in the file system with a
jar:file:prefix.
ZipEntryfor a nested jar must be saved by using the
ZipEntry.STOREDmethod. This is required so that we can seek directly to individual content within the nested jar. The content of the nested jar file itself can still be compressed, as can any other entries in the outer jar.: | https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/html/appendix-executable-jar-format.html | 2019-12-05T19:21:37 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.spring.io |
SQLLine is an open source utility modified by SQLstream to handle streaming data. SQLLine works similarly to other command-line database access utilities, such as sqlplus for Oracle, mysql for MySQL, and psql for PostgreSQL. SQLLine can also connect to other relational database drivers and execute SQL commands on those databases. More information about SQLLine can be found at
This page contains the following sections:
SQLline is supplied either as part of the s-Server installation package or as part of the Client Tools download from the SQLstream website (via SQLstream-
Using SQLstream’s SQLLine, you can do the following:
Three SQLstream scripts provide SQLLine functionality, and each one, once launched, can be used to connect to local or remote databases using a !properties command.
To connect to the local SQLstream s-Server using sqllineClient (on the same machine containing the script), use one of the following two methods:
Go to the SQLSTREAM_HOME/bin directory and use the following command:
sqllineClient (Linux) sqlline.cmd (Windows)
or
If you have installed on Linux as root, double-click the ‘Run sqlline’ icon on the desktop. A terminal window appears, showing the following command being executed:
jdbc:sqlstream:sdp:;sessionName='sqllineClient@/dev/pts/1:demo@bento'
When that command completes, the terminal window shows the following:
Connecting to jdbc:sqlstream:sdp://my-server;sessionName='sqllineClient@/dev/pts/3:drew@drew-VirtualBox' Connected to: SQLstream (<VERSION>) Driver: SQLstreamJdbcDriver (VERSION-distrib) Autocommit status: true Transaction isolation: TRANSACTION_REPEATABLE_READ sqlline version 1.0.13-mb by Marc Prud'hommeaux 0: jdbc:sqlstream:sdp://my-server>
Once you have started SQLLine using one of the scripts, you can connect to database servers by using a command (!connect or !properties or by supplying the connection parameters on the use sqllineRemote as shown in the next section.
Default connection values can be set for all the tools in the clienttools/default.conn.properties file. The properties are specified there as DRIVER, SERVER, NAME, and PASSWORD.
Each of those default values can be overwritten by passing a command-line argument to the script as follows:
Once connected to SQLstream or a database, you can use SQLLine in either of the following ways:
Begin using SQL commands against the current connection
or
Enter SQLLine commands.
All SQLLine commands start with an exclamation point (!).
You can get a list of all SQLLine commands by typing !help , or you can get specific help information on a known command by typing
!help <command name>
You also get specific help information when you enter a command without the parameters it requires.
For example, when you are running SQLline client, if you type !connect without supplying the required parameters, the response is as follows:
0: jdbc:sqlstream:sdp:> !connect Usage: connect <url> <username> <password> [driver]
To minimize typing, SQLLine will complete a partial command you’ve typed if you then press the tab key.
You can reissue commands you have already entered by pressing the Up arrow or Down arrow keys until you see the one you want. You can then edit that command, or simply execute it by pressing Enter. (See also the !history command described below.)
For a complete list of SQLline commands, see Complete SQLLine Command Set below.
SQLstream has found the following settings to be best practices:
Once you launch SQLline client using one of the scripts, you can optionally connect to SQLstream (already done if you used sqllineClient script) or another database. To do this, you can use any one of the following three methods:
You can connect to SQLstream if you use the command !connect myserver after you create a file named “myserver,” containing the following lines specifying server connection properties:
url=jdbc:sqlstream:sdp;sessionName=sqllineClient:sqlstreamuser@localhost driver=com.sqlstream.jdbc.Driver user=myusername password=mypassword
You can also use a !properties mydatabase command to connect to a database after you create a file named “mydatabase,” containing the needed database connection properties. That file’s contents would look similar to the following lines specifying those properties:
url=jdbc:mysql://sevilla:3306/ driver=com.mysql.jdbc.Driver user=sqlstreamusername password=s-serverbpwd
using the database-user-specific password.
Scripts with extension .sql are used in a variety of places in a SQLstream system. You can find many examples in the demo subdirectories of the SQLSTREAM_HOME/demo directory.
Support sql scripts residing in the SQLSTREAM_HOME/support/sql directory enable you to query your database schemas and configurations.
You can use the !run command to execute all such scripts, as for example:
!run <support script name>.sql
Most script names describe what they do. For example, the support scripts include the following
You can run SQLline as a client to any database server that supports JDBC (or to a local database). In other words, the SQLLine scripts enable command-line connection to a relational database to execute SQL commands.
This section illustrates how to establish such connections, using sqllineRemote as the example script. (The example assumes you have navigated to the directory $SQLSTREAM_HOME/bin, where this script resides.)
SqllineRemote uses Aspen runtime Jars for access to drivers.
On Linux, you can pass one or more connection-properties files to connect to the remote server(s):
./sqllineRemote file1 file2 ...
As a convenience, a connection-properties file names of the form
myserver.conn.properties
can be referenced as simply “myserver”:
./sqllineRemote myserver
To find files like myserver.conn.properties, sqllineRemote must execute in the directory in which it was started. Only then can !run find the properties file in that directory.
Create a <database>.conn.properties file with the following entries (supply your own password):
url=jdbc:<database>://myhostname driver=org.<database>.Driver user=<database> password=
To connect to a particular database, append it to the URL, for example,
jdbc:
./sqllineRemote <database>
Test it by using sqllineRemote for table access.
"................>" ?
The “greater than” sign (>) is a continuation prompt, enabling you to continue the statement you just entered without providing a final semicolon.
To continue and complete that statement, type the rest of the statement after that prompt, and then press Enter. To cancel the statement instead, type ; (semicolon) and press Enter to get the usual prompt back.
You may have the incremental parameter set to false, the default.
This parameter needs to be set to true:
!set incremental true;
You can view s-Server’s massive raw performance by using a script that generates data using a SQL “VALUES” clause. (The rate at which s-Server ingests data is generally much slower than the rate at which it processes data.) s-Server ships with an example script in _$SQLSTREAMHOME/examples/parallelPerformanceTest.
To run the script on a Linux machine, navigate to $SQLSTREAM_HOME/examples/parallelPerformanceTest and enter the following:
genParallelPerformanceTest.py %N
where “%N” is the number of pipelines you want to run. This number should correspond with the number of cores on your server. For example, to run two pipelines, you would enter
./genParallelPerformanceTest.py 2
When you run genParallelPerformanceTest.py, it generates the following four SQL scripts in your current directory:
In these scripts, N pipelines are used to count rows, grouped by an integer column. Each pipeline aggregates its input, outputting every millisecond. A final query then sums those together, outputting every minute.
You can invoke these scripts by opening SQLLine and entering
sqlline --run=<script name>
You should run these scripts in order.
For example, if you ran the script with two pipelines, you would navigate to $SQLSTREAM_HOME/bin, open SQLLine, and enter the following lines, one at a time:
!run ../examples/parallelPerformanceTest/setup2pipelines.sql !run ../examples/parallelPerformanceTest/startpumps2pipelines.sql !run ../examples/parallelPerformanceTest/listen2pipelines.sql
When you run listen2pipelines.sql, you will see something like the following:
Each line represents the number of rows per half minute. To stop the pumps, enter the following
!run ../examples/parallelPerformanceTest/stoppumps2pipelines.sql
The following alphabetic list provides brief explanations for all SQLLine commands. Some commands interrelate. | http://docs.sqlstream.com/sqlline/ | 2019-12-05T19:48:09 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.sqlstream.com |
).
All of the requests below are appended to an http request, as in
where "myserver:5580"
Requests a status update from webAgent itself. Includes the amount of memory it is using and lists of the active threads, webAgentsessions, and SQLstream connections.
{“message”: <status message>, “allocatedMemory”: <total memory in the JVM>, “maxMemory”: <maximum memory available to the JVM>, “freeMemory”: <free memory in the JVM>, “threads”: [<list of thread names>], “sessions”: [<list of webAgent session IDs>], “connections”: [<list of SQLstream connections>] }
{"message":"OK","maxMemory":129957888,"allocatedMemory":85000192,"freeMemory":78921232,"threads":["main","Poller SunPKCS11-Darwin","HashSessionScavenger-0","qtp1405643749-13 Acceptor0 [email protected]:5580 STARTED","qtp1405643749-14","qtp1405643749-15","qtp1405643749-16","qtp1405643749-17","qtp1405643749-18","qtp1405643749-19","qtp1405643749-20"],"sessions":[],"connections":[]}
Requests metadata for the contents of the SQLstream catalog. Replies with a list of the SQL objects present, either for the entire catalog or for a specified schema.
{“host”: <SQLstream host>, “port”: <SQLstream port>, “errorcode”: <error code>, “SQLstate”: <SQL state code>, “exceptionClass”: <exception thrown>, “message”: <error message>, “sqlobjects”: [ {“schema”: <schema name>, “name”: <object name>, “type”: <type name> }, … ] }
{"host":"myserver","port":5570,"errorCode":0,"SQLState":"00000","exceptionClass":"","message":"","sqlobjects":[{"schema":"AccessLog","name":"FrequentLocationsDescCSV","type":"FOREIGN STREAM"},{"schema":"AccessLog","name":"HitsPerHourCSV","type":"FOREIGN STREAM"},{"schema":"AccessLog","name":"HitsPerLocationCSV","type":"FOREIGN STREAM"},{"schema":"AccessLog","name":"LogStreamLFAd","type":"FOREIGN STREAM"},{"schema":"AccessLog","name":"LocationStream_","type":"STREAM"},{"schema":"AccessLog","name":"LogStreamRaw","type":"STREAM"},{"schema":"AccessLog","name":"LogStreamValid","type":"VIEW"},{"schema":"AccessLog","name":"LogStreamWithLocation","type":"VIEW"},]}
Requests metadata for the columns of a SQLstream object.
“columns”: [ {“name”: <column name>, “index”: <column index (starts with 1)>, “type”: <SQL type of column>, “precision”: <SQL precision>, “scale”: <SQL scale>, “nullable”: <true if column accepts nulls> }, … ]
{"columns":[{"name":"time","index":1,"type":"BIGINT","precision":0,"scale":0,"nullable":true},{"name":"ticker","index":2,"type":"VARCHAR","precision":5,"scale":0,"nullable":true},{"name":"shares","index":3,"type":"INTEGER","precision":0,"scale":0,"nullable":true},]}
Retrieves the contents of a static table.
If includecols is false (default) the rows will be output as an array of arrays, one per row. If true, the rows will be an array of objects, each with properties matching the column names for the stream.
If includecols is false:
[[row as array], …]
If includecols is true:
[{row as object}, …]
[[100,”Fred”,10,””,””,30,25,”Abc”,true,false], [110,”Eric”,20,”M”,”San Francisco”,3,80,”Abc”,null,false], [120,”Wilma”,20,”F”,””,1,50,””,null,true], [110,”John”,40,”M”,”Vancouver”,2,null,”Xyz”,false,true]]
[{"EMPID":30,"MANAGER":false,"NAME":"Fred","AGE":25,"DEPTNO":10,"PUBLIC_KEY":"Abc","EMPNO":100}, {"EMPID":3,"MANAGER":false,"NAME":"Eric","AGE":80,"DEPTNO":20,"PUBLIC_KEY":"Abc","GENDER":"M","CITY":"San Francisco","EMPNO":110}, {"EMPID":1,"MANAGER":true,"NAME":"Wilma","AGE":50,"DEPTNO":20,"PUBLIC_KEY":"","GENDER":"F","CITY":"","EMPNO":120}]
Initiates a session for continuous HTTP GETs from a SQLstream stream or view. The schema and SQL object are required parameters. You can use additional parameters, which are listed below, to control the minimum and maximum number of rows returned per request, how often to poll s-Server, and how long to wait before timing out. The response includes a session ID used when reading rows (see /getcontinuous/:sessionid below) along with status indicators and a list of the columns in the SQL object.
{“host”: <SQLstream host>, “port”: <SQLstream port>, “schema”: <name of schema>, “sqlobject”: <name of SQL object>, “sessionid”: <unique session ID>, “errorcode”: <error code>, “SQLstate”: <SQL state code>, “exceptionClass”: <exception thrown>, “message”: <error message>, “columns”: [ {“name”: <column name>, “index”: <column index (starts with 1)>, “type”: <SQL type of column>, “precision”: <SQL precision>, “scale”: <SQL scale>, “nullable”: <true if column accepts nulls> }, … ] }
{"host":"myserver","port":5570,"schema":"SALES","sqlobject":"BIDS","sessionid":"3ccd342f-4df1-4ffb-ad92-95f1c385673f","errorCode":0,"SQLState":"00000","exceptionClass":"","message":"","columns":[{"name":"ROWTIME","index":1,"type":"TIMESTAMP","precision":0,"scale":0,"nullable":false},{"name":"time","index":2,"type":"BIGINT","precision":19,"scale":0,"nullable":true},{"name":"ticker","index":3,"type":"VARCHAR","precision":5,"scale":0,"nullable":true},{"name":"shares","index":4,"type":"INTEGER","precision":10,"scale":0,"nullable":true},{"name":"price","index":5,"type":"REAL","precision":7,"scale":0,"nullable":true},{"name":"expiryMs","index":6,"type":"BIGINT","precision":19,"scale":0,"nullable":true},{"name":"comment","index":7,"type":"VARCHAR","precision":1024,"scale":0,"nullable":true}]}
Given a session ID as returned by /getcontinuous, close the session.
{“error”: <error code>, “message”: <error message> }
{"error":0, "message":""}
Given a session ID as returned by /getcontinuous, read from SQLstream. If at least minrows are queued before the timeout expires, you get a reply containing an error code and the rows as an array of arrays. After the timeout, you get a reply containing 0 to minrows – 1 rows.
If skiprows is true, webAgent will read continuously from SQLstream and discard old rows to prevent more than maxqueue rows from accumulating.
If skiprows is false, webAgent stops reading when the queue is full.
The client then has 16 times the timeout period to request data from this session, if no request is made in that time the session is abandoned and subsequent requests will return an error.
If ‘includecols’ is false (default) the rows will be output as an array of arrays, one per row. If true, the rows will be an array of objects, each with properties matching the column names for the stream.
{“error”: <error code>, “message”: <error message>, “rowCount”: <rows in this response>, “rows”: [ [<rows values as array>] … ] }
{"error":0, "message":"", "rowCount":144,"rows": [ ["Jan 23, 2020 10:04:58 PM",1327361539418,"MSFT",593,18.66,13323515,"sample comment B-2"], ["Jan 23, 2020 10:04:58 PM",1327359486053,"MSFT",443,15.18,13335116,"sample comment B-2"], ["Jan 23, 2020 10:04:58 PM",1327356654079,"SQLS",677,16.12,16721538,"sample comment C-2"], ["Jan 23, 2020 10:04:58 PM",1327361469393,"MSFT",401,16.9,4142586,"sample comment B-2"], ["Jan 23, 2020 10:04:58 PM",1327363275810,"ADBE",465,16.43,8830800,"sample comment A-1"], … ]}
Using this request, you can send SQL commands to s-Server through a websocket. The socket you get from /sqlstream accepts SQL commands and returns the result. This functions roughly like using the SQLline client through a websocket.
To use /sqlstream, you submit an http request to open a socket and receive a unique ID, which is used to construct a websocket URL, as above, as in
ws:///ws/0a46c064-4870-40db-b6ff-22c54ae1525f
Once you submit a request, the return message contains the path for the websocket. The websocket accepts messages consisting of a token and a SQL command. The token is used to identify the reply, is generated by your client, and might be a string containing a serial number, for example. The reply will contain the same token and the response from s-Server./<path>
ws:///ws/0a46c064-4870-40db-b6ff-22c54ae1525f
Receive If the result contains a table (if the SQL was a SELECT statement):
{“token”: <token corresponding to SQL command>, “nrows”: <number of rows in result> “columns”: [ {“name”: <column name>, “index”: <column index (starts with 1)>, “type”: <SQL type of column>, “precision”: <SQL precision>, “scale”: <SQL scale>, “nullable”: <true if column accepts nulls> }, … ], “rows”: [ [<rows values as array>] … ] }
Otherwise, only the number of rows affected is sent:
{“token”: <token corresponding to SQL command>, “nrows”: <number of rows affected by statement>}
If there is an error with the statement, the error is also returned:
{“token”: <token corresponding to SQL command>, “nrows”: 0, “error”: <SQL error message>}
Requests a websocket for receiving rows from SQLstream. The /read socket accepts SQL SELECT statements and replies with a JSON record describing the columns of the table/stream, then each of the rows of the table or stream as separate JSON records.
To open or close and reopen a stream, send the SQL Select statement as the command.
To stop a stream, send “stop” as the command.
By default, each socket supports one query at a time. In order to combine multiple queries over the same websocket, you can add the multiplex parameter to the /read URL and set it to true. Each query must then be accompanied by an arbitrary string tag or “token”. Sending a new query with an existing token cancels the old query associated with that token, then starts the new query. The rows received will arrive inside an object that includes the token, and the token will be included in all other messages (which are already JSON objects).
Note: This option is preferable to opening multiple websockets.
You may send a non-streaming select statement to read from a table or view on a table. The response will be the column info for the table followed by each of the rows of the table, each as a separate message. This stops any stream select that may have been running, so no further output will be sent until another select statement is sent. If multiplexing is enabled, you will receive an end-of-table message when the table has been exhausted. You may select from the same table multiple times, the full table will be sent each time (in contrast, if you issue a SELECT STREAM command and then issue the same command again, you will not get a new columns description, you will continue to get rows from the same stream.”}
{"success":true,"ws":”/ws/0a46c064-4870-40db-b6ff-22c54ae1525f”}
WebSocket
ws://myserver:5580/
Send
{“command”: <SQL select statement>|stop|””} {"command":"SELECT STREAM ROWTIME, * from \"SALES\".\"BIDS\"", "token": 1} {"command":"SELECT STREAM ROWTIME, * from \"SALES\".\"ASKS\"", "token": 2, “skip”: true, “loadLimit”: 12 }
{"command":"SELECT STREAM ROWTIME, * from \"SALES\".\"BIDS\""}
Receive Once, each time a new SELECT statement is sent:
{“token”: <token corresponding to SELECT command>, {“columns”: [ {“name”: <column name>, “index”: <column index (starts with 1)>, “type”: <SQL type of column>, “precision”: <SQL precision>, “scale”: <SQL scale>, “nullable”: <true if column accepts nulls> }, … ]}
For each row in the stream:
[<value>,<value> …]
For each row in the stream (multiplexing):
{“token”: <token corresponding to SELECT command>, “skipped”: <number of rows skipped due to load shedding>, “row”: [<value>,<value> …] }
At the end of a table select (must be multiplexing):
{“token”: <token corresponding to SELECT command>, “total_skipped”: <total number of rows skipped due to load shedding>, “total_rows”: <total number of rows in table> }
If an error occurs with the SQL statement:
{“token”: <token corresponding to SELECT command>, {“errorcode”: <some non-zero value>, “message”: <error message>
Requests a websocket for sending rows to SQLstream. Once you receive the websocket, the client sends a message containing a SQL INSERT statement to open and configure the stream and then sends subsequent messages each containing a single row. A message with “stop” as the command closes the stream./
Note: wss is currently not implemented.
Send
{“command”: <SQL insert statement>|stop|””}
{"command":"insert into \"SALES\".\"BIDS\" (\"ROWTIME\", \"time\", \"ticker\", \"shares\", \"price\", \"expiryMs\", \"comment\") values (?,?,?,?,?,?,?)"}
Receive
Once, each time a new INSERT statement is sent:
{“params”: [ {“index”: <column index (starts with 1)>, “mode”: <insert mode>, “type”: <SQL type of parameter>, “precision”: <SQL precision>, “scale”: <SQL scale>, “nullable”: <true if parameter accepts nulls> “signed”: <true if parameter is signed> }, … ]}
If an error occurs with the SQL statement:
{“errorcode”: <some non-zero value>, “message”: <error message> }
Send a row
[<value>,<value> …]
[“2020-03-09 01:34:56.87",1331262368627,"SQLS",589,19.98,12347529,"sample comment B-2"]
Note: the web UI provided by webAgent supports specifying a row that will contain random values (it can also repeatedly send rows at a given interval). The codes it recognizes are:
For example, this produces a row similar to the one in the example above:
[{{now}},{{nowMS}},{{"SQLS"|"APPL"|"PEAR">}},{{940..950}},{{42.0..44.0}},12347529,"sample comment {{A|B|C}}-{{1..3}}"]
Note that the web UI is doing the substitution. The write socket still expects a standard JSON array.
When you install s-Server, the installer gives you the option of running webAgent as a service called webAgentd. This is the recommended way of running webAgent.
There are two variables defined in /etc/default/webAgentd
webAgent includes a browser-based test tool through which you can run and confirm webAgent requests. To access the tool, enable the -a option when you launch webAgent. The tool is available at port 5580 at whatever host is running webAgent. Each page features a Save to URL option. This lets you copy the URL in the browser and send a test's results to another user. The home page for the tool lets you enter a host and port for s-Server, and then test webAgent's connectivity to s-Server.
To open the API test tools, click the button in the upper right corner of the home page.
The API Tests page lists tests that correspond to each webAgent request. See webAgent requests in the topic webAgent for more details.
When you open a test, its page lets you enter parameters for the test. For example, the \/sqlstream test lets you first enter a user name and password for s-Server, then open a web socket to s-Server.
Once the socket is open, you can enter SQL and send it to s-Server, viewing the results in the box below. Details on parameters for all tests appear in the topic webAgent in this guide. for example, parameters for /sqlstream appear here | http://docs.sqlstream.com/integrating-sqlstream/webagent/ | 2019-12-05T19:18:21 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.sqlstream.com |
Rotating Secrets for Amazon Redshift
You can configure AWS Secrets Manager to automatically rotate the secret for Amazon Redshift. Secrets Manager uses a Lambda function that Secrets Manager provides.
Amazon Redshift is a supported service
Secrets Manager supports Amazon Redshift which means that when you enable rotation, Secrets Manager provides a complete, ready-to-run Lambda rotation function.
When you enable rotation for a Credentials for Redshift as the secret type, Secrets Manager can automatically create and configure a Lambda rotation function for you. Then Secrets Manager equips your secret with the Amazon Resource Name (ARN) of the function. Secrets Manager creates the IAM role associated with the function and configures it with all of the required permissions. Alternatively, if you already have another secret that uses the same rotation strategy you want to use with your new secret, you can specify the ARN of the existing function and use it for both secrets.
If you run your Amazon Redshift cluster in a VPC provided by Amazon VPC and the VPC doesn't have public Internet access, then Secrets Manager also configures the Lambda function to run within the VPC. don't need to communicate with the Internet, you can configure the VPC with a private Secrets Manager service endpoint accessible from within the VPC.
Otherwise, you typically only need to provide a few details to determine which template Secrets Manager uses to construct the Lambda function:
Specify the secret that can use to rotate the credentials:
Use this secret: Choose this option if the credentials in the current secret have permissions to change the password. Choosing this option causes Secrets Manager to implement a Lambda function with a rotation strategy that changes the password for a single user with each rotation. For more information about this rotation strategy, see Rotating AWS Secrets Manager Secrets for One User with a Single Password.
Considerations
This option provides "lower availability". Because sign-in failures can occur between the moment when the rotation removes the old password and the moment when the updated password becomes accessible as the new version of the secret. This time window is typically very short—on the order of a second or less. If you choose this option, make sure that your client applications implement an appropriate "backoff and retry with jitter" strategy in their code. The apps should generate an error only if sign-in fails several times over a longer period of time.
Use a secret that I have previously stored in AWS Secrets Manager: Choose this option if the credentials in the current secret have more restrictive permissions and can't be used to update the credentials on the secured service. Or choose this if you require high availability for the secret. To choose this option, create a separate "master" secret with credentials that have permission to create and update credentials on the secured service. Then choose that master secret from the list. Choosing this option causes Secrets Manager to implement a Lambda function. This Lambda function has a rotation strategy that clones the initial user found in the secret. Then it alternates between the two users with each rotation, and updates the password for the user becoming active. For more information about this rotation strategy, see Rotating AWS Secrets Manager Secrets by Alternating Between Two Existing Users.
Considerations
This provides "high availability" because the old version of the secret continues to operate and handle service requests while the Secrets Manager prepares and tests the new version. Secrets Manager deprecates the old version after the clients switch to the new version. There's no downtime while changing between versions.
This option requires the Lambda function to clone the permissions of the original user and apply them to the new user. The function then alternates between the two users with each rotation.
If you change the permissions granted to the users, ensure that you change permissions for both users.
You can customize the function: You can tailor the Lambda rotation function that's provided by Secrets Manager to meet your organization's requirements. For example, you could extend the testSecret phase of the function to test the new version with application-specific checks to ensure that the new secret works as expected. For instructions, see Customizing the Lambda Rotation Function Provided by Secrets Manager. | https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets-redshift.html | 2019-12-05T20:03:02 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.aws.amazon.com |
A Career Competency Portfolio.
Careers
Documents tagged with Careers
Activity 7: Overcoming Obstacles to Complete Daily Chores and Responsibilities.
Activity 9: Setting priorities: How important is school?
Activity 8: Understanding Choice Making.
Case Study on representatives from a range of local organisations from the Limestone Coast Region in South Australia participating in the trial of the Blueprint, as part of a...
Case Study on Queensland Rail using the Blueprint to map competencies and to develop a new Apprentice Career Development Program (ACDP).
Student Workbook. Area A: Personal Management. Career Competency 1: Build and maintain a positive self concept. Phase 3: Develop abilities to maintain a positive self concept...
Student Workbook. Area A: Personal Management. Career Competency 2: Interact positively and effectively with others. Phase 3: Develop abilities for building positive...
Student Workbook. Area A: Personal Management. Career Competency 3: Change and grow throughout life. Phase 3: Learn to respond to change and growth.
Student Workbook on recognising stress triggers and coping strategies for dealing with stress.
Student Workbook on stress management methods.
Worksheet for identifying and discussing perceptions of self, including personal attributes.
Worksheet for identifying and presenting positive perceptions of self.
Worksheet for identifying personal skills.
Worksheet for identifying factors that influence career options.
Worksheet for applying positive self-talk as a means of developing self-confidence.
Worksheet for creating a profile of strengths and abilities.
Student Workbook. Area B: Learning and Work Exploration. Career Competency 4: Participate in life-long learning. Phase 3: Link life-long learning to the career building process...
Student Workbook. Area B: Learning and Work Exploration. Career Competency 5: Locate and effectively use career information. Phase 3: Locate, interpret, evaluate and use career...
Student Workbook. Area B: Learning and Work Exploration. Career Competency 6: Understand the relationship between work, society and the economy. Phase 3: Understand how...
Pages
| https://docs.education.gov.au/taxonomy/term/4715 | 2019-12-05T19:40:34 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['https://docs.education.gov.au/misc/feed.png',
'Subscribe to Careers'], dtype=object) ] | docs.education.gov.au |
Requirements
Multi Currency support is available in the Recurly Professional, Elite, and Enterprise plans.
In order for Recurly to be able to process transactions in a specific currency, your gateway must also be able to process payments in that currency. The list below attempts to reflect the complete list of currencies supported by Recurly and the gateways with which we are integrated. Keep in mind that Recurly's integration with a specific gateway may not support all currencies listed below regardless of that gateway's support for that currency. To be certain, look at your site to determine which currencies are available to you.
- Argentine Peso (ARS)
- Australian Dollars (AUD)
- Brazilian Real (BRL)
- British Pounds (GBP)
- Canadian Dollars (CAD)
- Chilean Peso (CLP)*
- Chinese Yuan (CNY)
- Colombian Peso (COP)
- Czech Korunas (CZK)
- Danish Kroner (DKK)
- Euros (EUR)
- Hong Kong Dollars (HKD)
- Hungarian Forints (HUF)
- Icelandic Krona (ISK)*
- Indian Rupee (INR)
- Israeli New Sheqel (ILS)
- Japanese Yen (JPY)*
- Mexican Peso (MXN)
- Norwegian Krones (NOK)
- New Zealand Dollars (NZD)
- Polish Złoty (PLN)
- Russian Ruble (RUB)
- Singapore Dollars (SGD)
- South African Rand (ZAR)
- South Korea Won (KRW)*
- Swedish Kronas (SEK)
- Swiss Francs (CHF)
- Thai Baht (THB)
- United States Dollars (USD)
- Venezuelan Bolívar (VEF)
*Currencies marked with an asterisk are zero-decimal currencies.
Note: Do you need a currency that's not listed above? If the currency you need is supported by your gateway, please contact Recurly Support to request support in Recurly for the currency. accepts one currency per gateway account. To accept more than one currency with this gateway, you will need to open multiple accounts with the payment gateway. Please contact your payment gateway to enquire about accepting multiple currencies.
- Braintree supports multiple currencies. Each currency requires a separate merchant account ID.
- CyberSource supports multiple currencies with a single account. Please contact CyberSource to enable additional currencies.
- Vantiv.
- Stripe supports multiple currencies with a single Stripe account, however, you will need to add an instance of the gateway (same credentials) for each currency you are approved to accept.
- Wirecard supports multiple currencies with a single account. Please contact Wirecard to enable additional currencies. | https://docs.recurly.com/docs/currencies | 2019-12-05T20:53:45 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.recurly.com |
chainer.dataset.converter¶
chainer.dataset.
converter()[source]¶
Decorator to make a converter.
This decorator turns a converter function into a
chainer.dataset.Converterclass instance, which also is a callable. This is required to use the converter function from an old module that does not support
chainer.backend.Deviceinstances (See the Device argument conversion section below).
Requirements of the target function
The target converter function must accept two positional arguments: a batch and a device, and return a converted batch.
The type of the device argument is
chainer.backend.Device.
The types and values of the batches (the first argument and the return value) are not specified: they depend on how the converter is used (e.g. by updaters).
Example
>>> @chainer.dataset.converter() ... def custom_converter(batch, device): ... assert isinstance(device, chainer.backend.Device) ... # do something with batch... ... return device.send(batch)
Device argument conversion
For backward compatibility, the decorator wraps the function so that if the converter is called with the device argument with
inttype, it is converted to a
chainer.backend.Deviceinstance before calling the original function. The
intvalue indicates the CUDA device of the cupy backend.
Without the decorator, the converter cannot support ChainerX devices. If the batch were requested to be converted to ChainerX with such converters,
RuntimeErrorwill be raised. | https://docs.chainer.org/en/stable/reference/generated/chainer.dataset.converter.html | 2019-12-05T20:20:21 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.chainer.org |
Introduction
Grenadine has made answering a call for submissions easier than ever. We have streamlined the process to allow you to submit with no added stress or frustration. Simply follow the steps below
Navigation Path
Process Overview
After selecting the following screen will appear. Prompting you to fill out the submissions.
An example of a submission form is shown below:
Once your submission has been successfully submitted.
In some cases, you will have to pay to submit. When this is the case select the button shown highlighted above.
You will now have the option to view your cart, where you can edit the contents or proceed to checkout.
When you select Proceed to checkout you will be directed to the screen shown above. After you have filled out this information you will be shown the confirmation message (below) and receive an e-mail confirming your purchase.
| https://docs.grenadine.co/attendee-sumbit.html | 2019-12-05T20:39:11 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['images/start_a_sub.jpg', None], dtype=object)
array(['images/speaker_submisions.jpg', None], dtype=object)
array(['images/images/pay_to_submit.jpg', None], dtype=object)
array(['images/view_cart_or_check_out.jpg', None], dtype=object)
array(['images/purchaser.jpg', None], dtype=object)
array(['images/confirmation.jpg', None], dtype=object)] | docs.grenadine.co |
You can use the NiFi ZooKeeper Migrator
The NiFi ZooKeeper Migrator is part of the NiFi Toolkit and is downloaded separately from the Apache NiFi download page. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_administration/content/zookeeper_migrator.html | 2017-10-17T04:12:42 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.hortonworks.com |
What is a Wapuu ? This cute character is the official mascot of WordPress designed by Kazuko Kaneuchi in 2011.
Since a lot of community and plugins has its own Wapuu, we thought it would be normal to have our own one.
We’d like to introduce our little Jack, our Wapuu :
Jack is the postman of your newsletters. He likes to deliver them (and sometimes that little gluttonous likes to eat them).
Jack carries its big green bag, holding all your emails and tries to deliver them in mailbox. He’s proud of his green hat.
We hope you like this little guy as much as we are.
If you want a sticker of Jack, we will be very pleased to give you somes at our next shows :
– Salon de l’E-marketing April 18-20 in Paris ( Porte de Versailles )
– WordCamp Europe June 15-17 in Paris ( Les Docks de Paris )
Long live Jack ! | https://docs.jackmail.com/blog/announce/introducing-our-wapuu-jack/ | 2017-10-17T03:49:59 | CC-MAIN-2017-43 | 1508187820700.4 | [array(['https://i2.wp.com/docs.jackmail.com/wp-content/uploads/2017/04/news_wapuu.png?fit=288%2C119&ssl=1',
'news_wapuu'], dtype=object)
array(['https://i0.wp.com/docs.jackmail.com/wp-content/uploads/2017/04/mascotte_round.png?resize=475%2C475&ssl=1',
None], dtype=object) ] | docs.jackmail.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
A document that contains additional information about the authorization status of a request from an encoded message that is returned in response to an AWS request.
Namespace: Amazon.SecurityToken.Model
Assembly: AWSSDK.SecurityToken.dll
Version: 3.x.y.z
The DecodeAuthorizationMessageResponse type exposes the following members
var response = client.DecodeAuthorizationMessage(new DecodeAuthorizationMessageRequest { EncodedMessage = "
" }); string decodedMessage = response.DecodedMessage;
| http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SecurityToken/TSecurityTokenDecodeAuthorizationMessageResponse.html | 2017-10-17T04:19:37 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Exception: Aws::ElasticBeanstalk::Errors::SourceBundleDeletionException
- Inherits:
- ServiceError
- Object
- RuntimeError
- Aws::Errors::ServiceError
- ServiceError
- Aws::ElasticBeanstalk::Errors::SourceBundleDeletionException
- Defined in:
- (unknown)
Instance Attribute Summary
Attributes inherited from Aws::Errors::ServiceError
Method Summary
Methods inherited from Aws::Errors::ServiceError
Constructor Details
This class inherits a constructor from Aws::Errors::ServiceError | http://docs.aws.amazon.com/sdkforruby/api/Aws/ElasticBeanstalk/Errors/SourceBundleDeletionException.html | 2017-10-17T04:19:27 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
Example: Choosing Compression Encodings for the CUSTOMER Table
The following statement creates a CUSTOMER table that has columns with various data types. This CREATE TABLE statement shows one of many possible combinations of compression encodings for these columns.
Copy
create table customer( custkey int encode delta, custname varchar(30) encode raw, gender varchar(7) encode text255, address varchar(200) encode text255, city varchar(30) encode text255, state char(2) encode raw, zipcode char(5) encode bytedict, start_date date encode delta32k);
The following table shows the column encodings that were chosen for the CUSTOMER table and gives an explanation for the choices: | http://docs.aws.amazon.com/redshift/latest/dg/Examples__compression_encodings_in_CREATE_TABLE_statements.html | 2017-10-17T04:14:29 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
This topic describes how to write a very small Universal Windows driver using Kernel-Mode Driver Framework (KMDF).
To get started, be sure you have Microsoft Visual Studio 2015 and the Windows Driver Kit (WDK) 10 installed.
Debugging Tools for Windows is included when you install the WDK.
Create and build a driver package
- Open Microsoft Visual Studio. On the File menu, choose New > Project.
- In the New Project dialog box, select WDF.
- In the middle pane, select Kernel Mode Driver, Empty (KMDF).
In the Name field, enter "KmdfHelloWorld" for the project name.
Note *When you create a new KMDF or UMDF driver, you must select a driver name that has 32 characters or less. This length limit is defined in wdfglobals.h.
In the Location field, enter the directory where you want to create the new project.
Check Create directory for solution. Click OK.
Visual Studio creates one project and a solution. You can see them in the Solution Explorer window, shown here. (If the Solution Explorer window is not visible, choose Solution Explorer from the View menu.) The solution has a driver project named KmdfHelloWorld.
In the Solution Explorer window, right-click KmdfHelloWorld, and choose Properties. Navigate to Configuration Properties > Driver Settings > General, and note that Target Platform defaults to Universal.
In the Solution Explorer window, right-click KmdfHelloWorld and choose Add > New Item.
In the Add New Item dialog box, select C++ File. For Name, enter "Driver.c".
Note The file name extension is .c, not .cpp.
Click **Add**. The Driver.c file is added under Source Files, as shown here. Open Driver.c, and enter this code:
#include <ntddk.h> #include <wdf.h> DRIVER_INITIALIZE DriverEntry; EVT_WDF_DRIVER_DEVICE_ADD KmdfHelloWorldEvtDeviceAdd; NTSTATUS DriverEntry(_In_ PDRIVER_OBJECT DriverObject, _In_ PUNICODE_STRING RegistryPath) { NTSTATUS status; WDF_DRIVER_CONFIG config; KdPrintEx(( DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL, "KmdfHelloWorld: DriverEntry\n" )); WDF_DRIVER_CONFIG_INIT(&config, KmdfHelloWorldEvtDeviceAdd); status = WdfDriverCreate(DriverObject, RegistryPath, WDF_NO_OBJECT_ATTRIBUTES, &config, WDF_NO_HANDLE); return status; } NTSTATUS KmdfHelloWorldEvtDeviceAdd(_In_ WDFDRIVER Driver, _Inout_ PWDFDEVICE_INIT DeviceInit) { NTSTATUS status; WDFDEVICE hDevice; UNREFERENCED_PARAMETER(Driver); KdPrintEx(( DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL, "KmdfHelloWorld: KmdfHelloWorldEvtDeviceAdd\n" )); status = WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &hDevice); return status; }
Save Driver.c.
In the Solution Explorer window, right-click Solution 'KmdfHelloWorld' (1 project) and choose Configuration Manager. Choose a configuration and platform for both the driver project and the package project. For this exercise, we choose Debug and x64.
In the Solution Explorer window, right-click KmdfHelloWorld and choose Properties. In Wpp Tracing > All Options, set Run Wpp tracing to No..
To see the built driver, in File Explorer, go to your KmdfHelloWorld folder, and then to C:\KmdfHelloWorld\x64\Debug. The folder includes:
- KmdfHelloWorld.sys -- the kernel-mode driver file
- KmdfHelloWorld.inf -- an information file that Windows uses when you install the driver
- KmdfHelloWorld.cat -- a catalog file that the installer uses to verify the test signature for the driver package
Deploy and install can deploy, install, load, and debug your driver:
- On the host computer, open your solution in Visual Studio. You can double-click the solution file, KmdfHelloWorld.sln, in your KmdfHelloWorld folder.
- In the Solution Explorer window, right-click the KmdfHelloWorld project, and choose Properties.
- In the KmdfHelloWorld Property Pages window, go to Configuration Properties > Driver Install > Deployment, as shown here.
- Root\KmdfHelloWorld. Click OK.
Note In this exercise, the hardware ID does not identify a real piece of hardware. It identifies an imaginary device that will be given a place in the device tree as a child of the root node. For real hardware, do not select Hardware ID Driver Update; instead, select Install and Verify. You'll see the hardware ID in your driver's information (INF) file. In the Solution Explorer window, go to KmdfHelloWorld > Driver Files, and double-click KmdfHelloWorld.inf. The hardware ID is located under [Standard.NT$ARCH$].
[Standard.NT$ARCH$] %KmdfHelloWorld.DeviceDesc%=KmdfHelloWorld_Device, Root\KmdfHelloWorld
On the Debug menu, choose Start Debugging, or press F5 on the keyboard.
Visual Studio first shows progress in the Output window. Then it opens the Debugger Immediate window and continues to show progress.
Wait until your driver has been deployed, installed, and loaded on the target computer. This might take a minute or two.
On the Debug menu, choose Break All. The debugger on the host computer will break into the target computer. In the Debugger Immediate window, you can see the kernel debugging command prompt: kd>.
At this point, you can experiment with the debugger by entering commands at the kd> prompt. For example, you could try these commands:
To let the target computer run again, choose Continue from the Debug menu.
- To stop the debugging session, choose Stop Debugging from the Debug menu.
Related topics
Developing, Testing, and Deploying Drivers
Debugging Tools for Windows
Send comments about this topic to Microsoft | https://docs.microsoft.com/en-us/windows-hardware/drivers/gettingstarted/writing-a-very-small-kmdf--driver | 2017-10-17T04:03:31 | CC-MAIN-2017-43 | 1508187820700.4 | [array(['images/firstdriverkmdfsmall03.png',
'screen shot of the solution explorer window, showing the driver.c file added to the driver project'],
dtype=object) ] | docs.microsoft.com |
Rsyslog container¶
Rsyslog can be used to stream your applications logs (watchdog). It's similar to using syslog, however there's no syslog in PHP container (one process per container). Rsyslog will stream all incoming logs to a container output.
Here how you can use it with Monolog:
- Install monolog module. Make sure all dependencies being downloaded
- Add new handler at
monolog/monolog.services.yml:
monolog.handler.rsyslog: class: Monolog\Handler\SyslogUdpHandler arguments: ['rsyslog']
- Rebuild cache (
drush cr)
- Use
rsysloghandler for your channels
- Find your logs in rsyslog container output
Read Logging in Drupal 8 to learn more. | http://docker4drupal.readthedocs.io/en/latest/containers/rsyslog/ | 2017-10-17T03:52:21 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docker4drupal.readthedocs.io |
Ruby Coding Guidelines¶
Strongly based on with some local changes.
Formatting¶
- Use UTF-8 encoding in your source files.
- Use 2 space indent, no tabs.
- Use Unix-style line endings, including on the last line of the file.
- Use spaces around operators, after commas, colons and semicolons, around { and before }.
- No spaces after (, [ and before ], ).
- Prefer postfix modifiers (if, unless, rescue) when possible.
- Indent when as deep as case then indent the contents one step more.
- Use an empty line before the return value of a method (unless it only has one line), and an empty line between defs.
- Use Yard and its conventions for API documentation. Don’t put an empty line between the comment block and the definition.
- Use empty lines to break up a long method into logical paragraphs.
- Keep lines shorter than 80 characters.
- Avoid trailing whitespace.
Syntax¶
- Use def with parentheses when there are arguments.
- Conversely, avoid parentheses when there are none.
- Never use for, unless you exactly know why. Prefer each or loop.
- Never use then, a newline is sufficient.
- Prefer words to symbols.
- and and or in place of && and ||
- not in place of !
- Avoid ?:, use if (remember: if returns a value, use it).
- Avoid if not, use unless.
- Suppress superfluous parentheses when calling methods, unless the method has side-effects.
- Prefer do...end over {...} for multi-line blocks.
- Prefer {...} over do...end for single-line blocks.
- Avoid chaining function calls over multiple lines (implying, use {...} for chained functions.
- Avoid return where not required.
- Avoid line continuation (\) where not required.
- Using the return value of = is okay.
- if v = array.grep(/foo/)
- Use ||= freely for memoization.
- When using regexps, freely use =~, -9, :math:`~, ` and $` when needed.
- Prefer symbols (:name) to strings where applicable.
Naming¶
- Use snake_case for methods.
- Use CamelCase for classes and modules. (Keep acronyms like HTTP, RFC and XML uppercase.)
- Use SCREAMING_CASE for other constants.
- Use one-letter variables for short block/method parameters, according to this scheme:
- a,b,c: any object
- d: directory names
- e: elements of an Enumerable or a rescued Exception
- f: files and file names
- i,j: indexes or integers
- k: the key part of a hash entry
- m: methods
- o: any object
- r: return values of short methods
- s: strings
- v: any value
- v: the value part of a hash entry
- And in general, the first letter of the class name if all objects are of that type (e.g.
nodes.each { |n| n.name })
- Use _ for unused variables.
- When defining binary operators, name the argument other.
- Use def self.method to define singleton methods.
Code design¶
- Avoid needless meta-programming.
- Avoid long methods. Much prefer to go too far the wrong way and have multiple one-line methods.
- Avoid long parameter lists, consider using a hash with documented defaults instead.
- Prefer functional methods over procedural ones (common methods below):
- each - Apply block to each element
- map - Apply block to each element and remember the returned values.
- select - Find all matching elements
- detect - Find first matching element
- inject - Equivalent to foldl from Haskell
- Use the mutating version of functional methods (e.g. map!) where applicable, rather than using temporary variables.
- Avoid non-obvious function overloading (e.g. don’t use [“0”] * 8 to initialize an array).
- Prefer objects to vanilla arrays/hashes, this allows you to document the structure and interface.
- Protect the internal data stores from external access. Write API functions explicitly.
- Use attr_accessor to create getters/setters for simple access.
- Prefer to add a to_s function to an object for ease of debugging.
- Internally, use standard libraries where applicable (See the docs for the various APIs).:
- Hash, Array and Set
- String
- Fixnum and Integer
- Thread and Mutex
- Fiber
- Complex
- Float
- Dir and File
- Random
- Time
- Prefer string interpolation “blah#{expr}” rather than appending to strings.
- Prefer using the %w{} family of array generators to typing out arrays of strings manually.
General¶
- Write ruby -w safe code.
- Avoid alias, use alias_method if you absolutely must alias something (for Monkey Patching).
- Use OptionParser for parsing command line options.
- Target Ruby 2.0 (except where libraries are not compatible, such as Chef).
- Do not mutate arguments unless that is the purpose of the method.
- Do not mess around in core classes when writing libraries.
- Do not program defensively.
- Keep the code simple.
- Be consistent.
- Use common sense. | http://docs.projectclearwater.org/en/stable/Clearwater_Ruby_Coding_Guidelines.html | 2017-10-17T03:45:20 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.projectclearwater.org |
Difference between revisions of "System Tests Working Group"
From Joomla! Documentation
Revision as of 06:31, 28 July 2014
Contents
Team Members
- Puneet Kala (Working Group Coordinator)
- Javier Gómez (PLT liaison: javier.gomez at commmunity.joomla.org)
- Kshitij Sharma
- Tanaporn Pantuprecharat
- Mark Dexter
Roadmap
The next steps in this team are:
GSoC 2014
Sauce Labs integration
- Test SauceLabs and Travis integration with the repo at GitHub:. It will require the use of Paratest, to run several tests at the same time, otherwise it will take too long (more than 8 hours just the /Adminitrator/ tests)
Improvements in current tests
- (done) Create an Assert in all test that makes sure there is not any warning, even if the test passes. See this example:
- (done) Create a graphical test coverage report
- Move to Codeception framework. Some tests are being done here:
- Create a Best Practice in System testing Joomla! extensions with component
Travis
We have Travis complaining about this:
E.................E......... I'm sorry but your test run exceeded 50.0 minutes. One possible solution is to split up your test run.
We need to find a solution, for example:
Documents
- Writing System Tests for Joomla!
- GSOC-Webdriver_system_tests_for_CMS repository: | https://docs.joomla.org/index.php?title=System_Tests_Working_Group&curid=34076&diff=123979&oldid=123936 | 2015-08-28T06:26:54 | CC-MAIN-2015-35 | 1440644060413.1 | [] | docs.joomla.org |
component of Type
type in the GameObject or any of its children using depth first search.
A component is returned only if it is found on an active GameObject.
// Disable the spring of the first HingeJoint component // found on any child object.
var hinge : HingeJoint; hinge = gameObject.GetComponentInChildren(HingeJoint); hinge.useSpring = false;
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public HingeJoint hinge; void Example() { hinge = gameObject.GetComponentInChildren<HingeJoint>(); hinge.useSpring = false; } }
Generic version. See the Generic Functions page for more details. | http://docs.unity3d.com/ScriptReference/GameObject.GetComponentInChildren.html | 2015-08-28T05:07:00 | CC-MAIN-2015-35 | 1440644060413.1 | [] | docs.unity3d.com |
The<<
This guide to the Console will go over:
- How to install the Console on your Omega
- How to access the Console from your Browser
- Managing your WiFi network connections
- Adjusting the Omega’s AP WiFi network
- Updating your Omega
- Installing additional Console apps
- Developing on the Console
Let’s get started! | https://docs.onion.io/omega2-docs/the-console.html | 2017-10-16T23:45:11 | CC-MAIN-2017-43 | 1508187820487.5 | [array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Get-Started/img/console-home-page.png',
'home-page'], dtype=object) ] | docs.onion.io |
Manage Users for an Amazon Chime Team Account
After you create an Amazon Chime team account, you can invite and remove users. With a team account, you can use the Amazon Chime console to invite users from any email domain.
Alternatively, if the users are from your organization's domain, you can convert your team account to an enterprise account. With an enterprise account, you can send an invitation to the users in your organization and provide them with instructions to create their own Amazon Chime accounts. For more information, see Manage Users for an Amazon Chime Enterprise Account.
Contents
Invite Users
Use the following procedure to invite users to join a team account.
To invite users to a team account
Open the Amazon Chime console at.
On the Accounts page, select the name of the team account.
On the Users page, choose Invite users.
Type the email addresses of the users to invite and then choose Invite users.
Remove Users
Use the following procedure to remove users from a team account. This disassociates the from the account and removes any license that you purchased for them. The user can still access Amazon Chime, but is no longer a paid member of your Amazon Chime account.
To remove users from a team account
Open the Amazon Chime console at.
On the Accounts page, select the name of the team account.
On the Users page, select the users to remove and choose User actions, Remove user. | http://docs.aws.amazon.com/chime/latest/ag/manage-users-team-account.html | 2017-10-17T00:15:27 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the GetTemplate operation. Returns the template body for a specified stack. You can get the template for running or deleted stacks.
For deleted stacks, GetTemplate returns the template for up to 90 days after the stack has been deleted.
If the template does not exist, a
ValidationError is returned.
Namespace: Amazon.CloudFormation.Model
Assembly: AWSSDK.CloudFormation.dll
Version: 3.x.y.z
The Get | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudFormation/TCloudFormationGetTemplateRequest.html | 2017-10-17T00:16:50 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.aws.amazon.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Glue::Types::StartTriggerRequest
Overview
Note:
When passing StartTriggerRequest as input to an Aws::Client method, you can use a vanilla Hash:
{ name: "NameString", # required }
Instance Attribute Summary collapse
- #name ⇒ String
The name of the trigger to start.
Instance Attribute Details
#name ⇒ String
The name of the trigger to start. | http://docs.aws.amazon.com/sdkforruby/api/Aws/Glue/Types/StartTriggerRequest.html | 2017-10-17T00:16:15 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.aws.amazon.com |
java.lang.Object
net.i2p.util.Clocknet.i2p.util.Clock
public class Clock
Alternate location for determining the time which takes into account an offset. This offset will ideally be periodically updated so as to serve as the difference between the local computer's current time and the time as known by some reference (such as an NTP synchronized clock). Protected members are used in the subclass RouterClock, which has access to a router's transports (particularly peer clock skews) to second-guess the sanity of clock adjustments.
protected I2PAppContext _context
protected long _startedOn
protected boolean _statCreated
protected volatile long _offset
protected boolean _alreadyChanged
public static final long MAX_OFFSET
public static final long MAX_LIVE_OFFSET
public static final long MIN_OFFSET_CHANGE
public Clock(I2PAppContext context)
public static Clock getInstance()
public Timestamper getTimestamper()
protected Log getLog()
public void setOffset(long offsetMs)
public void setOffset(long offsetMs, boolean force)
public long getOffset()
public boolean getUpdatedSuccessfully()
public void setNow(long realTime)
public void setNow(long realTime, int stratum)
setNowin interface
Timestamper.UpdateListener
stratum- ignored
public long now()
public void addUpdateListener(Clock.ClockUpdateListener lsnr)
public void removeUpdateListener(Clock.ClockUpdateListener lsnr)
protected void fireOffsetChanged(long delta) | http://docs.i2p2.de/javadoc/net/i2p/util/Clock.html | 2017-10-16T23:59:15 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.i2p2.de |
First you will want to make sure that you have the latest version of whichever browser you are using. Please refer to the sections below:
1. Click on the “Tools” menu at the top of the webpage and select “Internet Options”.
2. On the General Tab, click on the button that says “Delete” in the Browsing History section.
3. Click on the “Delete Files” button and choose “Yes” when the confirmation window comes up.
4. Do the same for the “Delete Cookies” and the “Delete History” buttons. | http://docs.daz3d.com/doku.php/artzone/pub/faq/accounts/account_help/resetting_cookies_and_cache | 2017-10-17T00:21:32 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.daz3d.com |
mysql_install_db needs to invoke
mysqld with the
--bootstrap and
--skip-grant-tables options. If
MySQL was configured with the
DISABLE_GRANT_OPTIONS compiler
flag,
-.
If you have set a custom
TMPDIR variable
when performing the installation, and the specified directory
is not accessible, the execution of
mysql_install_db may fail. You should unset
TMPDIR, or set
TMPDIR to
point to the system temporary directory (usually
/tmp)..
For internal use. The directory under which mysql_install_db looks for support files such as the error message file and the file for populating the help tables.
The login user name to use for running mysqld. Files and directories created by mysqld will be owned by this user. You must be
rootto use this option. By default, mysqld runs using your current login name and files and directories that it creates will be owned by you.
Verbose mode. Print more information about what the program does.
For internal use. This option is used for creating Windows distributions. | http://doc.docs.sk/mysql-refman-5.5/mysql-install-db.html | 2017-10-16T23:48:25 | CC-MAIN-2017-43 | 1508187820487.5 | [] | doc.docs.sk |
Contents Now Platform User Interface Previous Topic Next Topic onShow script for List v2 context menus Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share onShow script for List v2 context menus The onShow script field, on the Context Menu form, defines a script. The script runs before the context menu is displayed, to determine which options appear in the context menu. The onShow script is typically used to change the menu items on the list header menu based on the current field column. The following JavaScript variables are available to the onShow script when it is executed:Table 1. onShow script variables Variable Description g_menu Context menu to be displayed. g_item Current context menu item.. An example of an onShow script is one that determines when to enable or disable the Ungroup option in a list column heading); } On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-platform-user-interface/page/administer/navigation-and-ui/reference/r_OnShowScript.html | 2019-09-15T10:31:45 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Contents Performance Analytics and Reporting Previous Topic Next Topic Define a report drilldown in the Report Designer Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Define a report drilldown in the Report Designer You can define a report drilldown to allow reporting users to view subsets of the report data. When you define a report drilldown, it applies only to the report for which you define it. Before you beginThe report that you want to define a drilldown for must exist. Note: You can only drill down to data in the same table as the report. The following report types do not support the drilldown feature: list, histogram, calendar, control, box, and trendbox.How to create drilldowns and datasets in the report creation tool. Procedure Navigate to Reports > View / Run. Select the report you want to add a drilldown to. Click the Show report structure icon (). A badge on the Report structure icon displays the number of defined drilldowns. Click the Add drilldown icon (). Figure 1. Drilldown example Enter a Title for the drilldown and click Next. Select the chart Type to display the data and click Next. See Creating reports. The drilldown chart type can be different than the parent report. Configure the report. Configuration options depend on the selected Type. Click Save drilldown. ResultThe user can now drill down from the top-level report to the specified drilldown report visualizations. Note: All users can view report visualizations, such as pie charts and column reports. However, the last level of a drilldown is always a list. Platform access control lists determine user access to list information. Users who do not have rights to any part of the list data see the message "Number of rows removed from this list by Security constraints:" followed by the number. For more information, see Access control rules. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-performance-analytics-and-reporting/page/use/reporting/task/t_DefineAReportDrilldown.html | 2019-09-15T10:30:16 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Customizing RadRibbonBar
You can control several aspects of the overall RadRibbonBar presentation by setting the corresponding properties:
- The Expanded property controls whether the base of the control (the area beneath the tabs) will be visible initially. Set this property to False to hide the base of the control.
Figure 1: Expanded RibbonBar
Figure 2: Collapsed RibbonBar
The Expanded property also indicates whether the Ribbon Bar is expanded or collapsed. For instance, the end-user can collapse the control by double-clicking on any of the tabs
- The ShowExpandButton property controls whether the expand/collapse button will shown in RadRibbonBar. Set this property to True to show the button.
- The ShowHelpButton property controls whether the Help button will shown in RadRibbonBar. Set this property to True to show the button.
- The StartButtonImage property specifies an image to use for the Start Button in the upper left corner of the control.
The size of the Start Button is determined by the size of the image set.
- The Text property determines the text which is displayed in the Ribbon Bar's caption. | https://docs.telerik.com/devtools/winforms/controls/ribbonbar/designing-radribbonbar/customizing-radribbonbar | 2019-09-15T10:14:08 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/ribbonbar-getting-started-customizing-radribbonbar001.png',
'ribbonbar-getting-started-customizing-radribbonbar 001'],
dtype=object)
array(['images/ribbonbar-getting-started-customizing-radribbonbar002.png',
'ribbonbar-getting-started-customizing-radribbonbar 002'],
dtype=object)
array(['images/ribbonbar-getting-started-customizing-radribbonbar003.png',
'ribbonbar-getting-started-customizing-radribbonbar 003'],
dtype=object)
array(['images/ribbonbar-getting-started-customizing-radribbonbar004.png',
'ribbonbar-getting-started-customizing-radribbonbar 004'],
dtype=object) ] | docs.telerik.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Manage the WorkSpace Running Mode
The running mode of a WorkSpaces determines its immediate availability and how you pay for it. You can choose between the following running modes when you create the WorkSpace:
AlwaysOn — Use when paying a fixed monthly fee for unlimited usage of your WorkSpaces. This mode is best for users who use their WorkSpace full time as their primary desktop..
For more information, see Amazon WorkSpaces Pricing.
Modify the Running Mode
You can switch between running modes at any time.
To modify the running mode of a WorkSpace
Open the Amazon WorkSpaces console at.
In the navigation pane, choose WorkSpaces.
Select the WorkSpaces to modify and choose Actions, Modify Running Mode Properties.
Select the new running mode, AlwaysOn or AutoStop, and then choose Modify.
Stop and Start an AutoStop WorkSpace
When your AutoStop WorkSpaces are not in use, they are automatically stopped after a specified period of inactivity, and hourly metering is suspended. To further optimize costs, you can suspend the hourly charges associated with AutoStop WorkSpaces. The WorkSpace is stopped and all apps and data saved for the next time a user logs in to the WorkSpace.
Note
Amazon WorkSpaces can detect inactivity only when users are using Amazon WorkSpaces clients. If users are using third-party clients, Amazon WorkSpaces might not be able to detect inactivity, and therefore the WorkSpace might not automatically stop and metering might not be suspended.
When a user reconnects to a stopped WorkSpace, it resumes from where it left off, typically in under 90 seconds.
You can restart AutoStop WorkSpaces that are available or in an error state.
To stop an AutoStop WorkSpace
Open the Amazon WorkSpaces console at.
In the navigation pane, choose WorkSpaces.
Select the WorkSpaces to be stopped and choose Actions, Stop WorkSpaces.
When prompted for confirmation, choose Stop.
To start an AutoStop WorkSpace
Open the Amazon WorkSpaces console at.
In the navigation pane, choose WorkSpaces.
Select the WorkSpaces to be started and choose Actions, Start WorkSpaces.
When prompted for confirmation, choose Start.
To remove the fixed infrastructure costs associated with AutoStop WorkSpaces, remove the WorkSpace from your account. For more information, see Delete a WorkSpace. | https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html | 2019-09-15T10:18:10 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
Streaming support for request processing
The Citrix Web App Firewall now uses request side streaming, which results in a significant performance boost. Instead of buffering the entire request before processing it, the Web App Firewall now looks at the incoming data, field by field, to inspect the input of each field for any configured security check violation (SQL, XSS, Field Consistency, Field Formats, etc.). As soon as the processing of the data for a field is completed, it is forwarded to the backend while the evaluation continues for the remaining fields. This significantly improves the processing time specially when handling large posts where the forms have large number of fields.
Note:
Citrix Web App Firewall supports a maximum post size of 20 MB without streaming. For better resource utilization, Citrix recommends you to enable streaming option for payloads greater than 20 MB. Also, the back-end server must accept the chunked requests when streaming is enabled.
Although the streaming process is transparent to the users, minor configuration adjustments are required due to the following changes:
RegEx Pattern Match: RegEx pattern match is now restricted to 4K for contiguous character string match.
Field Name Match: Web App Firewall learning engine can only distinguish the first 128 bytes of the name for learning. If a form has multiple fields with names that have identical string match for the first 128 bytes, the learning engine may not be able to distinguish between them. Similarly, the deployed relaxation rule might inadvertently relax all such fields.
Removing white spaces, percent decoding, unicode decoding, and charset conversion which is done during canonicalization is carried out prior to security check inspection. The 128 byte limit is applicable to the canonicalized representation of the field name in UTF-8 character format. The ASCII characters are 1 byte but the UTF-8 representation of the characters in some international languages may range from 1-4 bytes. If each character in the name takes 4 bytes when converted to UTF-8 format, only first 32 characters in the name may be distinguished by the learned rule for such a language.
Field Consistency Check: When the field Consistency check is enabled, all the forms in the session are now stored based on the “as_fid” tag inserted by the Web App Firewall without consideration for the “action_url.”
- Mandatory Form tagging for Form Field consistency: When the field consistency check is enabled, the form tag must be enabled also. The Field Consistency protection might not work if form tagging is turned off.
- Sessionless Form Field Consistency: The Web App Firewall no longer carries out the “GET” to “POST” conversion of forms when sessionless field consistency parameter is enabled. The form tag is required for sessionless field consistency also.
- Tampering of as_fid: If a form is submitted after tampering as_fid, it now triggers field consistency violation even if no other field was tampered. In non-streaming requests, this was allowed because the forms could be validated using the “action_url” stored in the session.
Signatures: The signatures now have the following specifications:
Location: It is now a mandatory requirement that location must be specified for each pattern. All patterns in the rule MUST have a
<Location>tag.
Fast Match: All signature rules must have a fast match pattern. If there is no fast match pattern, an attempt will be made to select one if possible. Fast match must be a literal string but some PCRE’s can be used for fast match if they contain a usable literal string.
Deprecated Locations: Following locations are no longer supported in signature rules.
- HTTP_ANY
- HTTP_RAW_COOKIE
- HTTP_RAW_HEADER
- HTTP_RAW_RESP_HEADER
- HTTP_RAW_SET_COOKIE
XSS/SQL Transform: Raw data is used for transformation because the SQL special characters ( single quote(‘), backslash (), and semicolon (;)), and XSS tags ((<) and(>)) are same in all languages and do not need canonicalization of data. All representations of these characters, such as HTML entity encoding, percent encoding, or ASCII are evaluated for transform operation.
The Web App Firewall no longer inspects both the attribute name and value for the XSS transform operation. Now only XSS attribute names are transformed when streaming is engaged.
Processing XSS Tags: As part of the streaming changes in 10.5.e build onwards, the Web App Firewall processing of the Cross-site Scripting tags has changed. In earlier releases, presence of either open bracket (<), or close bracket (>), or both open and close brackets (<>) was flagged as Cross-site Scripting Violation. The behavior has changed in 10.5.e build onwards. Presence of only the open bracket character (<), or only the close bracket character (>) is no longer considered as an attack. It is when an open bracket character (<) is followed by a close bracket character (>), the Cross-site scripting attack gets flagged. Both characters must be present in the right order (< followed by >) to trigger Cross-site scripting violation.
Note
Change in SQL violation log Message: As part of the streaming changes in 10.5.e Web App Firewall detects the SQL violation, the entire input string might be included in the log message, as shown below:
SQL Keyword check failed for field text="select a name from testbed1\;\(\;\)".*<blocked>
In 11.0,.
RAW POST Body: The security check inspections are always done on RAW POST body.
Form ID: The Web App Firewall inserted “as_fid” tag, which is a computed hash of the form, will no longer be unique for the user session. It will now have an identical value for a specific form irrespective of the user or the session.
Charset: If a request does not have a charset, the default charset specified in the application profile is used when processing the request.
Counters:
Counters with prefix “se_” and “appfwreq_” are added to track the streaming engine and the Web App Firewall streaming engine request counters respectively.
nsconsmg -d statswt0 -g se_err_
nsconsmg -d statswt0 -g se_tot_
nsconsmg -d statswt0 -g se_cur_
nsconsmg -d statswt0 -g appfwreq_err_
nsconsmg -d statswt0 -g appfwreq_tot_
nsconsmg -d statswt0 -g appfwreq_cur_
_err counters: indicate the rare event which should have succeeded but failed due to either memory allocation problem or some other resource crunch.
_tot counters: ever increasing counters.
_cur counters: counters indicating current values that keep changing based on usage from current transactions.
Tips:
- The Web App Firewall security checks should work exactly the same as before.
There is no set ordering for the processing of the security checks.
- The response side processing is not affected and remains unchanged.
- Streaming is not engaged if CVPN is used.
Important
Calculating the Cookie length: In release 10.5.e (in a few interim enhancement builds prior to 59.13xx.e build) as well as in the 11.0 release (in builds prior to 65.x), Web App Firewall processing of the Cookie header was changed. In those releases, every cookie is evaluated individually, and if the length of any one cookie received in the Cookie header exceeds the configured BufferOverflowMaxCookieLength, the Buffer Overflow violation is triggered. As a result of this change, requests that were blocked in 10.5 and earlier release builds might be allowed, because the length of the entire cookie header is not calculated for determining the cookie length. In some situations, the total cookie size forwarded to the server might be larger than the accepted value, and the server might respond with “400 Bad Request”.
Note that this change has been reverted. The behavior in the 10.5.e ->59.13xx.e and subsequent 10.5.e enhancement builds as well as in the 11.0 release 65.x and subsequent builds is now similar to that of the non-enhancement builds of release 10.5. The entire raw Cookie header is now considered when calculating the length of the cookie. Surrounding spaces and the semicolon (;) characters separating the name-value pairs are also included in determining the cookie length. | https://docs.citrix.com/en-us/citrix-adc/12-1/application-firewall/appendixes/streaming-support-for-request-processing.html | 2019-09-15T10:49:35 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.citrix.com |
Audio features
You can configure and add the following Citrix policy settings to a policy that optimizes HDX audio features. For usage details plus relationships and dependencies with other policy settings, see Audio policy settings and Bandwidth policy settings and Multi-stream connections policy settings.
Important
We recommend delivering audio using User Datagram Protocol (UDP) rather than TCP, but UDP audio encryption using DTLS is available only between Citrix Gateway and Citrix Workspace app. Therefore, sometimes it might be preferable to use TCP transport. TCP supports end-to-end TLS encryption from the Virtual Delivery Agent (VDA) to Citrix Workspace app.. The policy is set to Medium - optimized-for-speech when UDP transport (recommended) is used. The High Definition audio setting provides high fidelity stereo audio, but consumes more bandwidth than other quality settings. Do not use this audio quality for non-optimized voice chat or video chat applications (such as softphones). The reason being that it might introduce latency into the audio path that is not suitable for real-time communications. We recommend the optimized for speech policy setting for real-time audio, regardless of the selected transport protocol.
When the.
Client audio redirectionClient audio redirection
To allow users to receive audio from an application on a server through speakers or other sound devices on the user device, leave the Client audio redirection setting at Allowed. This is the default.
Client audio mapping puts extra load on the servers and the network. However, prohibiting client audio redirection disables all HDX audio functionality.
For setting details, see Audio policy settings. Remember to enable client audio settings on the user device.
Client microphone redirectionClient microphone redirection
To allow users to record audio using input devices such as microphones on the user device, leave the Client microphone redirection setting at its default (Allowed).
For security, user devices alert their users when servers they don’t trust try to access microphones. Users can choose to accept or reject access before using the microphone. Users can disable this alert on Citrix Workspace app.
For setting details, see Audio policy settings. Remember to enable Client audio settings on the user device.
Audio Plug N PlayAudio Plug N Play
The Audio Plug N Play policy setting allows or prevents the use of multiple audio devices to record and play sound. This setting is Enabled by default. Audio Plug N Play enables audio devices to be recognized. The devices are recognized even if they are not plugged in until after the user session has started..
Audio over UDP Real-time Transport and Audio UDP port rangeAudio over UDP Real-time Transport and Audio UDP port range
By default, Audio over User Datagram Protocol (UDP) Real-time Transport is allowed (when selected at the time of installation). It opens up a UDP port on the server for connections that use Audio over UDP Real-time Transport. If there is network congestion or packet loss, we recommend configuring UDP/RTP for audio to ensure the best possible user experience. For any real time audio such as softphone applications, UDP audio is preferred to EDT. UDP allows for packet loss without retransmission, ensuring that no latency is added on connections with high packet loss..
The Audio UDP port rang specifies the range of port numbers that the VDA uses to exchange audio packet data with the user device.
By default, the range is 16500 through 16509.
For setting details about Audio over UDP Real-time Transport, see Audio policy settings. For details about Audio UDP port range, see Multi-stream connections policy settings. Remember to enable Client audio settings on the user device.
Audio setting policies for user devicesAudio setting policies for user devices
- Load the group policy templates by following Configuring the Group Policy Object administrative template.
- In the Group Policy Editor, expand Administrative Templates > Citrix Components > Citrix Workspace > User Experience.
- For Client audio settings, select Not Configured, Enabled, or Disabled.
- Not Configured. By default, Audio Redirection is enabled using high quality audio or the previously configured custom audio settings.
- Enabled. Enables audio redirection using the selected options.
- Disabled. Disables audio redirection.
- If you select Enabled, choose a sound quality. For UDP audio, use Medium (default).
- For UDP audio only, select Enable Real-Time Transport and then set the range of incoming ports to open in the local Windows firewall.
- To use UDP Audio with Citrix Gateway, select Allow Real-Time Transport Through gateway. Configure Citrix Gateway with DTLS. For more information, see this article.
As an Administrator, if you do not have control on endpoint devices to make these changes, use the default.ica attributes from StoreFront to enable UDP Audio. For example, for bring your own devices or home computers.
- On the StoreFront machine, open C:\inetpub\wwwroot\Citrix\<Store Name>\App_Data\default.ica with an editor such as notepad.
Make the following entries under the [Application] section.
; This text enables Real-Time Transport
EnableRtpAudio=true
; This text allows Real-Time Transport Through gateway
EnableUDPThroughGateway=true
; This text sets any echo. The effectiveness of echo cancellation is sensitive to the distance between the speakers and the microphone. Ensure that the devices aren’t too close or too far away from each other.
You can change a registry setting to disable echo cancellation..
-.
Citrix Virtual Apps and Desktops support several alternatives for delivering softphones.
- Control mode. The hosted softphone controls a physical telephone set. In this mode, no audio traffic goes through the Citrix Virtual Apps and Desktops server.
- HDX RealTime optimized softphone support. The media engine runs on user device, and Voice over Internet Protocol Citrix Virtual Apps and Desktops feature that allows an application such as a softphone to run locally on the Windows user device yet appear seamlessly integrated with their virtual/published desktop. This feature offloads all audio processing to the user device. For more information, see Local App Access and URL redirection.
- HDX RealTime generic softphone support. Voice over Internet Protocol Workspace app.
Generic softphone support is a feature of HDX RealTime. This approach to softphone delivery is especially useful when:
- An optimized solution for delivering the softphone is not available and the user is not on a Windows device where Local App Access can be used.
- The media engine that is needed for optimized delivery of the softphone isn’t installed on the user device or isn’t available for the operating system version running on the user device. In this scenario, Generic HDX RealTime provides a valuable fallback solution.
There are two softphone delivery considerations using Citrix Virtual Apps and Desktops:
- How the softphone application is delivered to the virtual/published desktop.
- How the audio is delivered to and from the user headset, microphone, and speakers, or USB telephone set.
Citrix Virtual Apps and Desktops include numerous technologies to support generic softphone delivery:
- Optimized-for-Speech codec for fast encode of the real-time audio and bandwidth efficiency.
- Low latency audio stack.
- Server-side jitter buffer to smooth out the audio when the network latency fluctuates.
- Packet tagging (DSCP and WMM) for Quality of Service.
- DSCP tagging for RTP packets (Layer 3)
- WMM tagging for Wi-Fi
The Citrix Workspace app versions for Windows, Linux, Chrome, and Mac also are Voice over Internet Protocol capable. Citrix Workspace app for Windows offers these features:
- Client-side jitter buffer - Ensures smooth audio even when the-based routing over the network.
- ICA supports four TCP and two UDP streams. One of the UDP streams supports the real-time audio over RTP.
For a summary of Citrix Workspace app capabilities, see Citrix Receiver Feature Matrix.
System configuration recommendations
Client Hardware and Software: For optimal audio quality, we recommend the latest version of Citrix Workspace app and a good quality headset that has acoustic echo cancellation (AEC). Citrix Workspace app versions for Windows, Linux, and Mac support Voice over Internet Protocol. Also, Dell Wyse offers Voice over Internet Protocol Citrix Virtual Desktops many broadcast packets. If IPv6 support is not needed, you can disable IPv6 on those devices. Configure to support Quality of Service.
Settings for use WAN connections: You can use voice chat over LAN and. Doing so maintains a high Quality of Service. NetScaler SD-WAN supports Multi-Stream ICA, including UDP. Also, for a single TCP stream, it’s possible to distinguish the priorities of various ICA virtual channels to ensure that high priority real-time audio data receives preferential treatment.
Use Director or the HDX Monitor to validate your HDX configuration.
Remote user connections: Citrix Gateway supports DTLS to deliver UDP/RTP traffic natively (without encapsulation in TCP). Open firewalls bidirectionally for UDP traffic over Port 443.
Codec selection and bandwidth consumption: Between the user device and the VDA in the data center, we recommend using the Optimized-for-Speech codec setting, also known as Medium Quality audio. Between the VDA platform and the IP-PBX, the softphone uses whatever codec is configured or negotiated. For example:
- G711 provides good voice quality but has a bandwidth requirement of from 80 kilobits per second through 100 kilobits per second per call (depending on Network Layer2 overheads).
- G729 provides good voice quality and has a low bandwidth requirement of from 30 kilobits per second through was. Supports audio devices having buttons or a display (or both), human interface device (HID), if the user device is on a LAN or LAN-like connection back to the Citrix Virtual Apps and Desktops server.
Citrix audio virtual channel
The bidirectional Citrix Audio Virtual Channel (CTXCAM) enables audio to be delivered efficiently over the network. Generic HDX RealTime takes the audio from the user headset or microphone and compresses it. Then, Internet Protocol. stereo. This reason being that the USB protocol tends to be sensitive to network latency and requires considerable network bandwidth. Isochronous USB redirection works well when using some softphones. This redirection provides excellent voice quality and low latency. However, Citrix Audio Virtual Channel is preferred because it is optimized for audio traffic. The primary exception is when you’re using an audio device with buttons. For example, a USB telephone attached to the user device that is LAN-connected to the data center. In this case, Generic USB Redirection supports buttons on the phone set or headset that control features by sending a signal back to the softphone. There isn’t an issue with buttons that work locally on the device.
LimitationLimitation.
You install an audio device on your client, enable the audio redirection, and start an RDS session. The audio files might fail to play and an error message appears.
As a workaround, add this registry key on the RDS machine, and then restart the machine:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SCMConfig
Name: EnableSvchostMitigationPolicy
Type: REG_DWORD
Data: 0
- Limitation | https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/multimedia/audio.html | 2019-09-15T10:43:05 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.citrix.com |
Use ADO to Return a List of Users Connected to a Database
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release.
Use ADO to Return a List of Users Connected to a Database
by Susan Sales Harkins
Application: Access 2000
Operating System: Windows
Any database administrator will tell you that a networked Access database can be a problem to maintain. To make changes, the administrator must open the database exclusively, which means no one else can have the database open. Either everyone must stop his work and close the database while the administrator tends to maintenance tasks, or the administrator must schedule chores during company off-hours.
As if that weren't enough to contend with, consider this one last annoying situation: You're the administrator and you've scheduled downtime or you're working on a Saturday while everyone else is enjoying the day off. You try to open the database exclusively only to learn that someone already has it open. Obviously, someone went home and left his copy open and running.
With earlier versions of Access, there isn't an easy way to find the system that's still running the database. However, Access 2000 offers the administrator a simple Access solution for this situation--using ADO's schema recordsets. You're probably familiar with recordsets; they contain data from your tables. Schema recordsets, however, contain information about the database itself.
How to open a schema recordset
To open a schema recordset you'll use the Connection object's OpenSchema method in the form
connection.OpenSchema(querytype, _ criteria, schemaID)
where connection identifies the Connection object and querytype is an intrinsic constant that tells ADO what kind of information you want. Criteria is an optional argument that filters the resulting recordset. The last parameter, schemaID, is a GUID that identifies a specific schema. This parameter is necessary only when querytype equals adSchemaProviderSpecific. For the Microsoft Jet OLE DB Provider, this constant returns four different schema recordsets:
- A recordset of current users of the database (this is the one we'll be working with in this article)
- A recordset of partial replica filters
- A recordset of replica conflict tables
- A recordset of ISAM statistics
For your convenience, Table A lists the global constants and GUIDs that apply to the Jet provider.
Table A: Jet OLE DB provider-specific constants
*****This is written as it appears in Microsoft's documentation. Feel free to correct when you declare constants in a procedure.
Returning the current users
Now we're ready to tackle the actual problem--an ADO procedure that will identify the current users of a database. For that purpose, we'll create a procedure that returns the current users of the Northwind sample database that comes with Access. If you try our procedure, be sure you use your system's correct path to Northwind.mdb--yours may not be the same as ours. First, launch Northwind, open a blank module, and enter the global constant
Global Const JET_SCHEMA_USERROSTER = _ "{947bb102-5d43-11d1-bdbf-00c04fb92675}"
in the module's General Declarations section. You could, if you like, replace the JET_SCHEMA_USERROSTER argument in our code with the actual value "{947bb102-5d43-11d1-bdbf-00c04fb92675}" and omit this statement. However, using the constant will make your code more readable. Later, you may have trouble remembering just what that long string of values means. Next, enter the procedure shown in Listing A.
Listing A: ReturnUserRoster() function
Sub ReturnUserRoster() Dim cnn As New ADODB.Connection Dim rst As ADODB.Recordset cnn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _ "Data Source=C:\Program Files\Microsoft " & _ "Office\Office\Samples\Northwind.mdb;" Set rst = cnn.OpenSchema(adSchemaProviderSpecific _ , , JET_SCHEMA_USERROSTER) Debug.Print rst.GetString Set rst = Nothing Set cnn = Nothing End Sub
Let's examine what this procedure does. After declaring the Connection and the Recordset objects, the Open method creates a connection to the Northwind.mdb database and then sets the rst object using the adSchemaProviderSpecific and JET_SCHEMA_USERROSTER arguments we discussed in the previous section. The resulting recordset will consist of one record for each current user in the database. The GetString method returns the recordset as a string and the procedure uses this method to print the recordset in the Immediate window. You could also send the results to a file, display it in a message box, or store the results in a table (which would take a bit more work than we've shown).
This particular schema recordset contains the following information:
- COMPUTER_NAME. Identifies the workstation as specified in the system's Network control panel.
- LOGIN_NAME. Specifies the name the user entered to log into the database, if it's secured. If it isn't secured, this field returns Admin.
- CONNECTED. Returns True (-1) if there's a corresponding user lock in the LDB file.
- SUSPECTED_STATE. Returns True (-1) if the user has left the database in a suspect state. Otherwise, this value is Null.
If you'd like to see the results, press [Ctrl]G to display the Immediate window. Then, type ReturnUserRoster and press [Enter] to run the procedure. Figure A shows the results of the procedure on our system.
Figure A: Our ReturnUserRoster lists all of the users currently accessing the Northwind database.
.gif)
Note that there appears to be a duplicate entry, KNIGHTRIDER. This is the name of the computer that we're running the procedure from. Since we ran the procedure from within the Northwind database, we actually have two connections to it--one from opening the database directly through Access and one created by the following statement in our procedure:
cnn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _ "Data Source=C:\Program Files\Microsoft " & _ "Office\Office\Samples\Northwind.mdb;"
Using the information shown in Figure A, we learn that there are three computers with a connection to the database, the database isn't secured, there are corresponding user locks, and that the database isn't in a suspect state. A suspect state can indicate that the database may need to be repaired, such as after it's improperly closed.
Make your job easier
Chasing down a stray open database can be a real problem for administrators. Fortunately, ADO makes this job much easier with the addition of schema recordsets, which return information about the database. In this article, we showed you how to use this new feature to return a list of current users for a networked database.
Copyright © 2001 Element K Content LLC. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of Element K Content LLC is prohibited. Element K is a service mark of Element K LLC. | https://docs.microsoft.com/en-us/previous-versions/office/developer/office2000/aa155436(v=office.10)?redirectedfrom=MSDN | 2019-09-15T10:34:42 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images%5caa155436.o2k0135a(en-us,office.10', None], dtype=object)] | docs.microsoft.com |
Displaying and customizing Web Apps and items
Updated on 20-October-2016 at 10:16 AM
Business Catalyst End of life announcement - find out more details.
After creating your Web App and added Web App items, the next step is to insert the Web App listing on your site and then customizing the look and feel of the items.
Insert the Web app listing in a page
- Open the page in editing mode by selecting Site Manager > Pages and clicking the page in the tree view.
- Place the cursor in the content editor where you want to insert the items.
- In the Modules tab of the Toolbox found on the right of the editor, select Web Apps > List of Web App Items.
- Select the Web app and then select the item or items you want to display, then click Insert.
Business Catalyst inserts a placeholder tag representing the Web app module. When you publish the page, Web app items replace the placeholder tag.
- Click Update or Save Draft.
This will display the Web App items in the default listing format. The next step is to customize the Web App layouts so that you can customize how Web App items are displayed on your site.
Customzing Web App layouts
Web App layouts control the way in which Web App items are displayed. There are 4 layouts you can customize:
- Detail layout - This is the large/detailed view of a Web App item. It can be accessed by clicking the name of an item from the list view, or by linking directly to the item detail page.
- Edit layout - This page is used with vistior submitted Web App items. This page appears when the user clicks edit on the live site to update the Web App item.
Note: The edit layout is only used when building visitor-submitted Web Apps. Basic (administrator-submitted) Web Apps do not require an edit interface, because the contributors log into the Admin Console to create and edit items. Visitor-submitted Web Apps can be configured to only let the visitor who submitted the content edit it. They can also be configured to let all registered visitors edit the content items (based on the settings applied in the Web App Details page).
When working with the Edit layout in the online editor in the Admin Console, you can scroll to the bottom and click the Restore to Default button to quickly update the edit layout will all of the custom data fields that you added to the Web App. Although the Restore to Default button is available for the other three layouts, the Edit layout is the only one that populates the code with the custom fields.
- List layout - The list of all of the Web App items when inserted into a page.
Note: Although you can format the List layout with HTML, CSS and JavaScript to update its appearance, you'll use the Toolbox to choose whether the list will display a single Web App item, a list of all items, or a list of the most recent items. Using the interface, you can specify how many items are displayed.
- List layout (backup) - This layout provides a second way to display the list of Web App items.
The List layout (backup) layout is not always used, but you can insert this alternate view of the list of Web App items in situations where you want to display the content of a Web App in a different way. For example, you can feature one specific item, or a series of rotating, random Web App items on the site's home page. This enables you to raise the visibility of the Web App without taking up a lot of screen real estate. In the case of a visitor-submitted Web App, you can use this approach to entice users to register so that they can see all the Web App items, or to encourage them to contribute content to the Web App.
Accessing the layouts
To access the Web App layouts, do one of the following:
Select Site Manager > Module Templates, and then click Web App Layouts.
In the page that appears, select the name of the Web App, and then use the menu to choose the layout you wish to edit: List, List (Backup), Detail or Edit.
Select Web Apps, and then click on the Web App name.
Click Edit Web App Settings, and then click the Layout tab.
Use the menu to choose the layout you wish to edit: List, List (Backup), Detail or Edit.
Using tags to customize the layouts
With the layout of your choice open, you can now insert dynamic tags from the Toolbox menu on the right to customize these layouts. The tags represent dynamic content generated by the data in the custom fields for an item..
For example, if you have created a custom radio list field called "options", you can display the content of this field using:
{tag_options}
When published, the selection for each Web App item is displayed in place of this tag. See below screenshots for further details:
- This is an example of the custom field created:
- This is an example of the tag used for the above custom field.
- This is an example of the Web App item created in the admin:
The above example will render "Option 2" on the live site when inserted on a page. | http://docs.businesscatalyst.com/user-manual/web-apps/update-the-detail-layout | 2019-09-15T10:12:56 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.businesscatalyst.com |
If you need a detailed record over a custom period you can use the In/Out Activity Report. Go to Reports --> In/Out Activity. You can filter by Employee, Department, Location and Position when scheduling is enabled. When you run the report you can export it. Here is an example:
If you need a sample, please ask, we're happy to provide one. | https://docs.buddypunch.com/en/articles/1064242-in-out-activity-report | 2019-09-15T10:35:50 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['https://downloads.intercomcdn.com/i/o/106134059/177c75cb01866ee93c715048/In%3AOut+Activity.png',
None], dtype=object) ] | docs.buddypunch.com |
xml.etree.ElementTree — The ElementTree XML API¶.
Tutorial¶
This is a short tutorial for using
xml.etree.ElementTree (
ET in
short). The goal is to demonstrate some of the building blocks and basic
concepts of the module.
XML tree and elements¶.
Parsing XML¶ can import this data by reading from a file:
import xml.etree.ElementTree as ET tree = ET.parse('country_data.xml') root = tree.getroot()
Or directly'
Note
Not. A document type declaration may be accessed by passing a
custom
TreeBuilder instance to the
XMLParser
constructor..
Finding interesting elements¶.
Modifying an XML File¶>
Building XML documents¶
The
SubElement() function also provides a convenient way to create new
sub-elements for a given element:
>>> a = ET.Element('a') >>> b = ET.SubElement(a, 'b') >>> c = ET.SubElement(a, 'c') >>> d = ET.SubElement(c, 'd') >>> ET.dump(a) <a><b /><c><d /></c></a>
Parsing XML with Namespaces.
Example¶]")
Reference¶
Functions¶
xml.etree.ElementTree.
Comment(text=None)¶.)¶, parser=None)¶
Parses an XML section from a string constant. Same as
XML(). text is a string containing XML data. parser is an optional parser instance. If not given, the standard
XMLParserparser is used. Returns an
Elementinstance.
xml.etree.ElementTree.
fromstringlist(sequence, parser=None)¶)¶
Checks if an object appears to be a valid element object. element is an element instance. Returns a true value if this is an element object.
xml.etree.ElementTree.
iterparse(source, events=None, parser=None)¶)¶)¶)¶ 3.2.
xml.etree.ElementTree.
SubElement(parent, tag, attrib={}, **extra)¶.)¶)¶"?>
xml.etree.ElementTree module:
from xml.etree”:
< Objects¶
- class
xml.etree.ElementTree.
Element(tag, attrib={}, **extra)¶.
tag¶
A string identifying what kind of data this element represents (the element type, in other words).
text¶
tail¶
These attributes can be used to hold additional data associated with the element. Their values are usually strings but may be any application-specific object. If the element is created from an XML file, the text attribute holds either the text between the element’s start tag and its first child or end tag, or
None, and the tail attribute holds either the text between the element’s end tag and the next tag, or
None. For the XML data
<a><b>1<c>2<d/>3</c></b>4</a>
the a element has
None¶.
clear()¶
Resets an element. This function removes all subelements, clears all attributes, and sets the text and tail attributes to
None.
get(key, default=None)¶
Gets the element attribute named key.
Returns the attribute value, or default if the attribute was not found.
items()¶
Returns the element attributes as a sequence of (name, value) pairs. The attributes are returned in an arbitrary order.
keys()¶
Returns the elements attribute names as a list. The names are returned in an arbitrary order.
The following methods work on the element’s children (subelements).
append(subelement)¶
Adds the element subelement to the end of thisiterator(tag=None)¶
Deprecated since version 3.2: Use method
Element.iter()instead.
insert(index, subelement)¶
Inserts subelement at the given position in this element. Raises
TypeErrorif subelement is not an
Element.
iter(tag=None)¶.
makeelement(tag, attrib)¶
Creates a new element object of the same type as this element. Do not call this method, use the
SubElement()factory function instead.
remove(subelement)¶")
ElementTree Objects¶
- class
xml.etree.ElementTree.
ElementTree(element=None, file=None)¶
ElementTree wrapper class. This class represents an entire element hierarchy, and adds some extra support for serialization to and from standard XML.
element is the root element. The tree is initialized with the contents of the XML file if given.
_setroot(element)¶
Replaces the root element for this tree. This discards the current contents of the tree, and replaces it with the given element. Use with care. element is an element instance.
find(match,)¶. Objects¶
- class
xml.etree.ElementTree.
QName(text_or_uri, tag=None)¶ a URI, and this argument is interpreted as a local name.
QNameinstances are opaque.
TreeBuilder Objects¶
- class
xml.etree.ElementTree.
TreeBuilder(element_factory=None)¶)¶
Adds text to the current element. data is a string. This should be either a bytestring, or a Unicode string.
start(tag, attrs)¶
Opens a new element. tag is the element name. attrs is a dictionary containing element attributes. Returns the opened element. 3.2.. | https://docs.python.org/3.7/library/xml.etree.elementtree.html | 2019-09-15T10:08:31 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.python.org |
Contents Now Platform Administration Previous Topic Next Topic Create a SAML logout endpoint Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-platform-administration/page/integrate/saml/task/t_CreateASAMLLogoutEndpoint.html | 2019-09-15T10:32:55 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Contents Security Operations Previous Topic Next Topic Security Operations Integration - Email Search and Delete workflow Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Security Operations Integration - Email Search and Delete workflow The Security Operations Integration - Email Search and Delete workflow returns the number of threat emails from an email server search and, optionally, return details for each email found. After the email search is completed, you can delete the emails. About this task The search query can take some time to complete. After the count is received, approval is required to delete emails from an email server. This workflow is triggered by the Delete from Email Server(s) and Search on Email Server(s) buttons on the Email Search form in a security incident. For more information, see Search for and delete phishing emails. Activities specific to this workflow are described here. For more information on other activities, see Common integration workflow activities. Execution Tracking Begin (Mail Search) activityThe Execution Tracking - Begin (Mail Search) capability execution activity creates an execution tracking record and marks the record state as Started. This activity is used by all capability and implementation workflows to keep track of their state. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/madrid-security-management/page/product/security-operations-common/task/secops-integ-email-search-delete.html | 2019-09-15T10:35:16 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Structure your modeling solution
Note
This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here
To use models effectively in a development project, the team members must be able to work on models of different parts of the project at the same time. This topic suggests a scheme for dividing the application into different parts that correspond to the layers in an overall layering diagram.
To start on a project or subproject quickly, it is useful to have a project template that follows the project structure that you have chosen. This topic describes how to create and use such a template.
This topic assumes that you are working on a project that is large enough to require several team members, and perhaps has several teams. The code and models of the project are stored on a source control system such as Team Foundation Server. At least some team members use Visual Studio to develop models, and other team members can view the models by using other Visual Studio versions.
To see which versions of Visual Studio support each tool and modeling feature, see Version support for architecture and modeling tools.
Solution structure
In a medium or large project, the structure of the team is based on the structure of the application. Each team uses a Visual Studio solution.
To divide an application into layers
Base the structure of your solutions on the structure of your application, such as web application, service application, or desktop application. A variety of common architectures is discussed in Application Archetypes in the Microsoft Application Architecture Guide.
Create a Visual Studio solution, which we will call the Architecture solution. This solution will be used to create the overall design of the system. It will contain models but no code.
Add a layer diagram to this solution. On the layer diagram, draw the architecture you have chosen for your application. For example, the diagram might show these layers and the dependencies between them: Presentation; Business logic; and Data.
You can create the layer diagram and a new Visual Studio solution at the same time by using the New UML or Layer Diagram command on the Architecture menu.
Add to the Architecture model UML diagrams that represent the important business concepts, and use cases that are referred to in the design of all the layers.
Create a separate Visual Studio solution for each layer in the Architecture layer diagram.
These solutions will be used to develop the code of the layers.
Create UML models that will represent the designs of the layers and the concepts that are common to all the layers. Arrange the models so that all the models can be seen from the Architecture solution, and the relevant models can be seen from each layer.
You can achieve this by using either of the following procedures. The first alternative creates a separate modeling project for each layer, and the second creates a single modeling project that is shared between the layers.
To use a separate modeling project for each layer
Create a modeling project in each layer solution.
This model will contain UML diagrams that describe the requirements and design of that layer. It can also contain layer diagrams that show nested layers.
You now have a model for each layer, plus a model for the application architecture. Each model is contained in its own solution. This enables team members to work on the layers at the same time.
To the Architecture solution, add the modeling project of each layer solution. To do this, open the Architecture solution. In Solution Explorer, right-click the solution node, point to Add, and then click Existing Project. Navigate to the modeling project (.modelproj) in one layer solution.
Each model is now visible in two solutions: its "home" solution and the Architecture solution.
To the modeling project of each layer, add a layer diagram. Start with a copy of the Architecture layer diagram. You can delete parts that are not dependencies of the layer diagram.
You can also add layer diagrams that represent the detailed structure of this layer.
These diagrams are used to validate the code that is developed in this layer.
In the Architecture solution, edit the requirements and design models of all the layers by using Visual Studio.
In each layer solution, develop the code for that layer, referring to the model. If you are content to do the development without using the same computer to update the model, you can read the model and develop code by using versions of Visual Studio that cannot create models. You can also generate code from the model in these versions.
This method guarantees that no interference will be caused by developers who edit the layer models at the same time.
However, because the models are separate, it is difficult to refer to common concepts. Each model must have its own copy of the elements on which it is dependent from other layers and the architecture. The layer diagram in each layer must be kept in sync with the Architecture layer diagram. It is difficult to maintain synchronization when these elements change, although you could develop tools to accomplish this.
To use a separate package for each layer
In the solution for each layer, add the Architecture modeling project. In Solution Explorer, right-click the solution node, point to Add, and then click Existing Project. The single modeling project can now be accessed from every solution: the Architecture project, and the development project for each layer.
In the shared UML model, create a package for each layer: In Solution Explorer, select the modeling project. In UML Model Explorer, right-click the model root node, point to Add, and then click Package.
Each package will contain UML diagrams that describe the requirements and design of the corresponding layer.
If required, add local layer diagrams for the internal structure of each layer.
This method allows the design elements of each layer to refer directly to those of the layers and common architecture on which it depends.
Although concurrent work on different packages can cause some conflicts, they are fairly easy to manage because the packages are stored in separate files. The major difficulty is caused by the deletion of an element that is referenced from a dependent package. For more information, see Manage models and diagrams under version control.
Creating architecture templates
In practice, you will not create all your Visual Studio solutions at the same time, but add them as the project progresses. You will probably also use the same solution structure in future projects. To help you create new solutions quickly, you can create a solution or project template. You can capture the template in a Visual Studio Integration Extension (VSIX) so that it is easy to distribute and to install on other computers.
For example, if you frequently use solutions that have Presentation, Business, and Data layers, you can configure a template that will create new solutions that have that structure.
To create a solution template
Download and install the Export Template Wizard, if you have not already done this.
Create the solution structure that you want to use as a starting point for future projects.
On the File menu, click Export Template as VSIX. The Export Template as VSIX Wizard opens.
Following the instructions in the wizard, select the projects that you want to include in the template, provide a name and description for the template, and specify an output location.
Note
The material in this topic is abstracted and paraphrased from the Visual Studio Architecture Tooling Guidance, written by the Visual Studio ALM Rangers, which is a collaboration between Most Valued Professionals (MVPs), Microsoft Services, and the Visual Studio product team and writers. Click here to download the complete Guidance package.
Related materials
Organizing and Managing Your Models - video by Clint Edmondson.
Visual Studio Architecture Tooling Guidance – Further guidance on managing models in a team
See Also
Manage models and diagrams under version control Use models in your development process | https://docs.microsoft.com/en-us/visualstudio/modeling/structure-your-modeling-solution?view=vs-2015&redirectedfrom=MSDN | 2019-09-15T10:36:54 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Overview
Thank you for choosing Telerik ® RadDocking!
Are you comfortable handling multiple windows in your application? Save yourself the hassle with RadDocking for Silverlight – a docking system like the one in Microsoft Visual Studio. You get the dockable ToolWindows, a hidden DockingManager control, and a designer to make the creation of attractive layouts easy.
Key Features
RadDocking's key features include:
Save/Load Layout: The control allows you eas thаt the users can easily switch between different views.
Pin/Unpin and Hide Panes: Each RadPane provides built-in pin/unpin functionality. You can read more about this in the Pinned/Unpinned Panes section of the documentation.. | https://docs.telerik.com/devtools/silverlight/controls/raddocking/overview2 | 2019-09-15T09:55:00 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/RadDocking_Overview_010.png', 'RadDocking for Silverlight'],
dtype=object)
array(['images/RadDocking_Overview2.png', 'Rad Docking Overview'],
dtype=object) ] | docs.telerik.com |
Donut
Similarly to Pie series, Donut series do not use axes. They visualize each data point as a slice with arc size directly proportional to the magnitude of the raw data point’s value. It is important to note that the donut series are valid only in the context of Pie AreaType. Donut pieces represent data in one dimension contrasting with the other series which represent data in two dimensions. Here is an example of how to create donut series populated with data:
Initial Setup
this.radChartView1.AreaType = ChartAreaType.Pie; DonutSeries series =; this.radChartView1.Series.Add(series);
Me.RadChartView1.AreaType = ChartAreaType.Pie Dim series As Me.RadChartView1.Series.Add(series)
Figure 1: Initial Setup
DonutSeries DonutSeries how to set the Range property:
AngleRange
AngleRange range = new AngleRange(270, 300); series.Range = range;
Dim range As New AngleRange(270, 300) series.Range = range
Figure 2: AngleRange.
InnerRadiusFactor: The property is used to determine the inner radius of the donut series. Like RadiusFactor, its value is used as a percentage of the whole radius, if the RadiusFactor factor is set the value will be calculated according to the entire new radius.
Additionally, DonutSeries allows offsetting a pie segment from the rest of the slices. This is achieved through the OffsetFromCenter property of the individual PieDataPoint. The following snippet demonstrates how to shift the first pie piece:
Donut Offset: Donut Offset
| https://docs.telerik.com/devtools/winforms/controls/chartview/series-types/donut | 2019-09-15T09:37:53 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/chartview-series-types-donut001.png',
'chartview-series-types-donut 001'], dtype=object)
array(['images/chartview-series-types-donut002.png',
'chartview-series-types-donut 002'], dtype=object)
array(['images/chartview-series-types-donut003.png',
'chartview-series-types-donut 003'], dtype=object)] | docs.telerik.com |
-
-
-
-
-
-
display event severities and SNMP traps details on NetScaler MAS
When you create an event and its settings in NetScaler MAS, you can view the event immediately on the Event Summary page. Similarly, you can view and monitor the health. The Infrastructure Dashboard displays up time, models, and the versions of all NetScaler instances added to your NetScaler MAS server.
On the Infrastructure dashboard, you can now mask irrelevant values so that you can more easily view and monitor the NetScaler instances. You can see information such as event by severities, health, up time, models, and version of NetScaler instances in detail.
For example, events with a Critical severity level might occur rarely. However, when these critical events occur on your network, you might want to further investigate, troubleshoot, and monitor the event. If you select the severity levels except Critical, the graph displays only the occurrences of critical events.
Click the graph to go the Severity based events page. You can see the event details when a critical event occurred for the duration that you’ve selected: the instance source, the date, category, and message notification sent when the critical event occurred.
Similarly, you can view the health of a NetScaler VPX instance on the Dashboard. You can mask the time during which the instance was up and running, and display only the times the instance was out of service. By clicking the graph, you are taken to that instance’s page, where the out of service filter is already applied, and see details such as host name, the number of HTTP requests it received per second, CPU usage. You can also select the instance and see that the particular NetScaler instance’s dashboard for more details
To select specific events by severity in NetScaler MAS:
Log on to NetScaler MAS,. The length of each section corresponds to the total number of events of that type of severity.
You can click each section on the donut chart to display the corresponding Severity based events page. This page shows the following details for the selected severity for the duration specified:
Instance Source
Data of the event
Category of events generated by the NetScaler instance
Message notification sent
Note:
Below the donut chart you can see a list of severities that are represented in the chart. By default, a donut chart displays the events of all severity types, and therefore the severity types in the list are highlighted. You can toggle the severity types to more easily view and monitor your chosen severity.
To view NetScaler SNMP trap details on NetScaler MAS
You can now view the details of each SNMP trap received from its managed NetScaler instances on the NetScaler MAS.”
How to display event severities and SNMP traps details on NetScaler MAS | https://docs.citrix.com/en-us/netscaler-mas/12/event-management/how-to-display-event-severities-and-skews-of-SNM-traps-infrastructure-dashboard-mas.html | 2019-09-15T10:44:05 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['/en-us/netscaler-mas/12/media/EventSettingsSNMPDetails.png',
'localized image'], dtype=object) ] | docs.citrix.com |
[]{"FirstName", "LastName"});
With the above example a specific Person is being read but only its ‘FirstName’ and ‘LastName’ will contains values and all the other properties will contain a
null value.
You may use the same approach with the
SqlQuery or
IdsQuery:
SqlQuery<Person> query = new SqlQuery<Person>("") .ReadMultiple(docQuery);
Supported Operations
A projection is defined for any operation that returns data from the Space. Therefore id-based or query-based operations support projections. You can use the Projection API with
Read,
Take,
ReadById,
TakeById,
ReadMultiple and
TakeMultiple operations. When performing a
Take operation with projection, the entire Object will be removed from the space, but the result returned to the user will contain only the projected properties.
You can use projections with a Notify Container, when subscribing to notifications. You can use it with a Polling Container, when consuming Space Objects. You can also create a Local View with templates or a
View using projections. The local view will maintain the relevant objects, but the view of the data will contain only the projected properties.
Both dynamic and fixed properties can be specified - the syntax is the same. As a result, when providing a property name which is not part of the property set, it will be treated as a dynamic property: That is, if there is no like-named dynamic property present on a query result Object, then the property will be ignored entirely (and no Exception will be thrown). Please note that a result may contain multiple objects, each with a different combination of properties (fixed and/or dynamic) - each object will be treated individually when applying projections to it.
Considerations
- You can’t use a projection on Local Cache, as the Local Cache needs to contain fully constructed objects. Reconstructing an Object locally with projection would only negatively impact performance.
- You can’t use a projection to query a Local View for the same reason as for Local Cache. However, you can create a Local View with a projection template in which case the Local View will contain the Objects in their projected form.
Working Examples
This repository (Scala) contains an integration test that performs projection on a query in the context of Executor Based Remoting. Relevant lines of code (Scala) are here . | https://docs.gigaspaces.com/xap/12.1/dev-dotnet/query-partial-results.html | 2019-09-15T09:48:17 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.gigaspaces.com |
Hiding a retail price for items that are not on sale
Updated on 20-October-2016 at 10:16 AM
Business Catalyst End of life announcement - find out more details.
Add the following script to the large or small products layout for an online store:
<script language="javascript"><!-- var onsale_{tag_productid} = "{tag_onsale}"; if (onsale_{tag_productid} == "0") { document.getElementById("rrpprice_{tag_productid}").style.display = 'none'; } //--></script>
Then you would assign the following tag ID to a retail price tag
<div id="rrpprice_{tag_productid}">{tag_retailprice}</div> | http://docs.businesscatalyst.com/user-manual/e-Commerce/hiding-a-retail-price-for-items-that-are-not-on-sale | 2019-09-15T10:42:50 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.businesscatalyst.com |
Kafka is a highly scalable messaging platform that provides a method for distributing information through a series of messages organised by a specified topic. With Tungsten Clustering the incoming stream of data from the upstream replicator is converted, on a row by row basis, into a JSON document that contains the row information. A new message is created for each row, even from multiple-row transactions.
The deployment of Tungsten Clustering to Kafka service is slightly different. There are two parts to the process:
Service Alpha on the master extracts the information from the MySQL binary log into THL.
Service Alpha on the slave reads the information from the remote replicator as THL, and applies that to Kafka.
With the Kafka applier, information is extracted from the source database using the row-format, column names and primary keys are identified, and translated to a JSON format, and then embedded into a larger Kafka message. The topic used is either composed from the schema name or can be configured to use an explicit topic type, and the generated information included in the Kafka message can include the source schema, table, and commit time information..
The THL information is then applied to Kafka using the Kafka applier.
There are some additional considerations when applying to Kafka that should be taken into account:
Because Kafka is a message queue and not a database, traditional transactional semantics are not supported. This means that although the data will be applied to Kafka as a message, there is no guarantee of transactional consistency. By default the applier will ensure that the message has been correctly received by the Kafka service, it is the responsibility of the Kafka environment and configuration to ensure delivery. The replicator.applier.dbms.zookeeperString can be used to ensure acknowledgements are received from the Kafka service.
One message is sent for each row of source information in each transaction. For example, if 20 rows have been inserted or updated in a single transaction, then 20 separate Kafka messages will be generated.
A separate message is broadcast for each operation, and includes the operation type. A single message will be broadcast for each row for each operation. So if 20 rows are delete, 20 messages are generated, each with the operation type.
If replication fails in the middle of a large transaction, and the
replicator goes
OFFLINE, when the
replicator goes online it may resend rows and messages.
The two replication services can operate on the same machine, (See Section 6.2, “Deploying Multiple Replicators on a Single Host”) or they can be installed on two different machines. | http://docs.continuent.com/tungsten-replicator-5.2/deployment-applier-kafka.html | 2019-09-15T10:28:01 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.continuent.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
AWS Lambda-backed Custom Resources
When you associate a Lambda function with a custom resource, the function is invoked whenever the custom resource is created, updated, or deleted. AWS CloudFormation calls a Lambda API to invoke the function and to pass all the request data (such as the request type and resource properties) to the function. The power and customizability of Lambda functions in combination with AWS CloudFormation enable a wide range of scenarios, such as dynamically looking up AMI IDs during stack creation, or implementing and using utility functions, such as string reversal functions. | https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html | 2019-09-15T10:52:14 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
veraPDF CLI Quick Start Guide
The veraPDF command line interface is the best way of processing batches of PDF/A files. It’s designed for integrating with scripted workflows, or for shell invocation from programs.
We assume you’ve already downloaded and installed the software, if not please read the installation guide first.
Using the terminal
We’ve provided a quick primer on setting up and using the terminal on our supported platforms here.
Getting help
You can get the software to output its built in CLI usage message by typing verapdf.bat -h or verapdf --help, an online version is available here.
Configuring veraPDF
veraPDF is controlled by a set of configuration files, you can read a brief overview here.
How-tos
The following examples all make use of the veraPDF test corpus. This is
available on GitHub. It is also
installed with the veraPDF software if you enable it
at step 3. The test corpus will be installed in a
sub-directory called
corpus. The examples assume your terminal session
is running in the installation directory with a suitable alias set up to avoid
typing path-to-verapdf/verapdf. On a Mac or Linux box this can be set up by typing export verapdf='export verapdf='path-to-verapdf/verapdf' at the command line.
Links to how-tos
- Using veraPDF for PDF/A Validation:
- Extracting features (metadata) from PDFs:
- Enforcing institutional policy.
- Fixing PDF/A metadata. | http://docs.verapdf.org/cli/ | 2017-06-22T22:04:06 | CC-MAIN-2017-26 | 1498128319912.4 | [] | docs.verapdf.org |
Materials¶
We have pre-defined a few materials. You are free to define additional materials, as in:
sv:Ma/Air/Components=4 "Carbon" "Nitrogen" "Oxygen" "Argon" # names of elements uv:Ma/Air/Fractions=4 0.000124 0.755268 0.231781 0.012827 # fractions of elements d:Ma/Air/Density=1.2048 mg/cm3 d:Ma/Air/MeanExcitationEnergy=85.7 eV s:Ma/Air/DefaultColor="lightblue"
All Elements have been pre-defined with natural isotope abundance from the NIST database. You will only need to create your own Elements if you need something other than natural isotope abundance. For that, see Elements and Isotopes below.
Fractions are by weight.
MeanExcitationEnergy is the
I parameter in the Bethe equation, which not only includes ionization, but also inner-atomic excitations, etc.
In the Default Parameters section, we show the complete list or pre-defined materials. This basically covers those materials that are used in our included examples.
You may also use any of the Materials and Compounds that are defined by default in Geant4. The names start with the prefix,
G4_, such as:
G4_Al,
G4_Ti,
G4_MUSCLE_SKELETAL_ICRP, etc. The complete list of these materials and compounds can be found here.
- NIST material names must be specified with exact case.
- As of this writing, the mean excitation energy listed in the above reference for
G4_WATERis incorrect. It lists
G4_WATERmean excitation energy as 75.0 eV but it is actually set to 78.0 eV.
Note
The Geant4-DNA physics processes have special behavior for
G4_WATER. They take into account the material’s molecular properties rather than just the atomic properties. Accordingly, you should use
G4_WATER rather than defining your own Water, unless you have some other reason to make a specific change (such as changing the mean excitation energy to something other than 78.0 eV).
It is up to you to define any additional materials that you want in your own parameter files.
If you make your own material, make sure to pick a new material name (the string after the
Ma/) and make sure that any other parameter file that uses this material includes the file where you defined this material (either directly or through Parameter File Chains). The usual rules of Parameter File Graphs govern parameter conflicts.
Warning
Do not use the prefix
G4_ for the materials that you add. This prefix is reserved for materials and compounds from the pre-defined NIST database.
Where a pre-defined material definition exists, it is generally better to use that definition rather than making your own material. The pre-defined material may provide extra benefit by triggering specific corrections to ionization models.
If you have a set of materials that differ only in density, you can define them all at once (this is commonly needed for imaging to material conversion):
i:Ma/MyMaterial/VariableDensityBins = 100 u:Ma/MyMaterial/VariableDensityMin = .1 u:Ma/MyMaterial/VariableDensityMax = 10.
will generate 100 versions of
MyMaterial, with densities varying from .1 x normal to 10. x normal. The material names will then be like:
MyMaterial_VariableDensityBin_0 MyMaterial_VariableDensityBin_1 ... MyMaterial_VariableDensityBin_99
Elements and Isotopes¶
All Elements have been pre-defined with natural isotope abundance from the NIST database. You will only need to create your own Elements if you need something other than natural Isotope abundance. You can define additional elements as follows:
Define each isotope that you will use, specifying
Z,
N and
A:
i:Is/U235/Z = 92 i:Is/U235/N = 235 d:Is/U235/A = 235.01 g/mole i:Is/U238/Z = 92 i:Is/U238/N = 238 d:Is/U238/A = 238.03 g/mole
Define your element with your desired proportion of these isotopes:
s:El/MyEIU/Symbol = "MyElU" sv:El/MyElU/IsotopeNames = 2 "U235" "U238" uv:El/MyElU/IsotopeAbundances = 2 90. 10.
See Isotope.txt example. | http://topas.readthedocs.io/en/latest/parameters/material.html | 2017-06-22T22:11:45 | CC-MAIN-2017-26 | 1498128319912.4 | [] | topas.readthedocs.io |
User Guide
Local Navigation
Changing transformation properties
You can use the Transformation tab in the Inspector to specify transformation properties for an object to change the object’s appearance. You can change an object's position, size, rotation angle, or you can skew its x-axis or y-axis. You can also configure visibility properties and opacity properties.
You can also use the Transformation tool on the Toolbox to change the position, size, and rotation properties directly on the workspace.
You can animate transformation properties.
- Set the position coordinates for an object
- Scale an object
- Skew an object
- Rotate an object
- Make an object visible or invisible for the current frame
- Specify the opacity level of an object
Next topic: Set the position coordinates for an object
Previous topic: Using bitmap image effects
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/21108/Changing_transformation_properties_630031_11.jsp | 2014-10-20T08:46:59 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
About file encryption
File encryption is designed to protect files that you store on your BlackBerry smartphone and on a media card that can be inserted in your smartphone. You can encrypt the files on your smartphone and on your media card using an encryption key that your smartphone generates, your smartphone password, or both.
If you encrypt the files using an encryption key that your smartphone generates, you can only access the files on your media card when the media card is inserted in your smartphone. If you encrypt the files using your smartphone password, you can access the files on your media card in any smartphone that you insert your media card into, as long as you know the password for the smartphone.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/38346/1571287.jsp | 2014-10-20T08:40:16 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
Edit this website:
For more information on you can read up on Confluence and Documentation in our Developers Guide.
2.2.x is expected to be the stable branch of Geotools until July, at which point 2.3.x will be tested enough to become the new stable branch.
Geotools is an open source Java GIS toolkit for developing standards compliant solutions. It provides an implementation of Open Geospatial Consortium (OGC) specifications as they are developed.. | http://docs.codehaus.org/pages/viewpage.action?pageId=49042 | 2014-10-20T08:12:46 | CC-MAIN-2014-42 | 1413507442288.9 | [array(['/download/attachments/593/geotools_banner_small.gif?version=2&modificationDate=1125015458988&api=v2',
None], dtype=object) ] | docs.codehaus.org |
Home of BTM, the_1<<
This is BTM's strongest point compared to its competitors: it is trivial to configure and when something goes wrong it is much easier to figure out what to do thanks to the great care placed in useful error reporting and logging.
Using BTM
- Overview
- New user's guide
- Download
- Roadmap
- Recommended Readings
- Getting support
- JTA Best Practice | http://docs.codehaus.org/pages/viewpage.action?pageId=85983258 | 2014-10-20T08:27:11 | CC-MAIN-2014-42 | 1413507442288.9 | [array(['http://www.bitronix.be/images/shim.gif', None], dtype=object)
array(['/download/attachments/5243197/goal.gif?version=1&modificationDate=1224772172994&api=v2',
None], dtype=object) ] | docs.codehaus.org |
A software framework is the base of an application that can be used by a developer. The framework in Joomla! 1.5 unleashes a great deal of power for them. The Joomla! code has been completely overhauled and cleaned up.!
You need Joomla! 1.5 or greater for this tutorial.
While the idea behind a component may seem extremely simple, code can quickly become very complex as additional features are added or the interface is customized.
Model-View-Controller (herein, the Entry Point is depicted as a small circle and attached to the viewer, the Template is added. With these five components you should be able to understand this tutorial of making a basic Joomla! MVC component.
Part 1 of the tutorial only focusses on the Controller and the Viewer (with the use of the Template); these are are marked with the blue colour in the picture. Part 2 adds, and Part 3 extends, the model functionality for data manipulation abstraction; marked with the green colour in the picture.
Keep in mind that this simplified picture only applies for the site section, the front-end user. An identical picture is applicable for the admin section. The administrative section is taken care of in Part 4 and further of this component development tutorial. Both the site and the admin section are maintained and configured in an XML based installation file.
In Joomla!, the MVC pattern is implemented using three classes: JModel, JView and JController. For more detailed information about these classes, please refer to the API reference documentation (WIP).
For learning purposed and debugging, adding a run-time debugger to your Joomla! site might be a good extension especially during development of your (tutorial) component. A good example is the community project J!Dump that has the advantage of being a pop-up thus leaving the view output unchanged (if no pop-up then all dump() debugging is removed) and the system allows you to view not only your development properties but also methods.
For our basic component, we only require five files:
Remember that the filename for the entry point must have the same name of the component. For example, if you call your component "Very Intricate Name Component", atVar( ).
The task of the view is very simple: It retrieves the data to be displayed and pushes it into the template. Data is pushed into the template using the JView::assignRef method.
The code for the view at site/views/hello/view.html.php:
<) { $greeting = "Hello World!"; $this->assignRef( 'greeting', $greeting ); parent::display($tpl); } }:
The format of the XML file at hello.xml is as follows:
<?xml version="1.0" encoding="utf-8"?> <install type="component" version="1.5.0"> <name>Hello</name> <!-- The following elements are optional and free of formatting constraints --> >
If you look closely you will notice that there are some files that will be copied that we have not discussed. These are the index.html other file is the hello.php file. This is the entry point for the admin section of our component. Since we don't have an admin section of our component, it will have the same content as the index.html files at this point in time.
Developing a Model-View-Controller Component - Part 2 - Adding a Model
Developing a Model-View-Controller Component - Part 3 - Using the Database
Developing a Model-View-Controller Component - Part 4 - Creating an Administrator Interface
The component can be downloaded at: com_hello1_01 | http://docs.joomla.org/index.php?title=Developing_a_Model-View-Controller_Component/1.5/Introduction&diff=15243&oldid=15242 | 2014-10-20T09:22:11 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
.
This CirTap (talk| contribs) 6 years ago. (Purge)). | http://docs.joomla.org/index.php?title=JDOC:Policies_and_guidelines&oldid=6742 | 2014-10-20T09:13:28 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
In most cases, when you place a straight wall, it has a rectangular profile when viewed in elevations parallel to its length. If your design calls for a different shape of profile, or for openings in the wall, use the following procedure to edit the wall’s elevation profile in either a section or elevation view.
Design with non-rectangular walls and cut openings
To edit the profile of a wall
If the active view is a plan view, the Go To View dialog displays, prompting you to select an appropriate elevation or section view. For example, for a north wall, you could select either the North or South elevation view.
When an appropriate view is open, the profile of the wall displays in magenta model lines, as shown.
Wall modified
Sketch lines unlocked
When you edit the elevation profile of a wall that spans multiple levels and create notches such as those shown below, the new vertical edges represent jambs that are referred to in Revit as mid-end faces. Other walls can form corner joins with mid-end faces. See Joining Walls to Mid-End Faces
Wall elevation profile edited to create notches
Edited wall in 3D view
You can also create mid-end faces using the Wall Opening tool. See Cutting Rectangular Openings in Walls. | http://docs.autodesk.com/REVIT/2011/ENU/filesUsersGuide/WS73099cc142f487557bfaa8da122ec01152f52bd.htm | 2014-10-20T08:06:59 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.autodesk.com |
The typing input method that you use determines how you type. For example, if you're using the predictive input method, your BlackBerry smartphone displays a list of suggested words as you type so that you don't have to type the entire word.
The language that you're typing in determines the typing input methods that are available. If you're typing in a language that has multiple typing input methods, you can switch between typing input methods when you're typing. When you're typing in certain fields such as password fields, your smartphone might automatically switch your typing input method. | http://docs.blackberry.com/en/smartphone_users/deliverables/44780/1794141.html | 2014-10-20T08:31:26 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
Instructions (for 2.2+ releases)
Releases are now directly made from the CI server, which dramatically simplifies the procedure. Basically, the process is fully automated and only involves filling a form. The CI server will take care of:
- checking out the sources
- running the test suite against the release version of the JDK
- build the artifacts
- upload the artifacts to Bintray
- synchronize the version with Maven Central
- unzip the documentation to the web server
- tag and push the tag
First step is to login to the CI server. You need a priviledged account to perform a release. If you don't have such an account, ask the Groovy project manager.
- log into
- click on the release plan
- then on the Artifactory Release Management tab
This is where you will be able to perform the release. You need to fill the form with the following values:
Image Added
- groovyVersion : normal Groovy
- groovyBundleVersion : OSGI specific
- checkout branch is very important and corresponds to the branch we want to release from. In the screenshot, we are releasing from the GROOVY_2_3_0_RC_X branch
- Use release branch is not mandatory. If checked, then a specific branch will be used to release. This can be useful if you expect commits to be pushed while a release is in progress.
- Create VCS tag should always be checked. Put here the name of the release tag to be pushed after the release is done.
- Artifactory configuration: choose the oss-release-local repository.
- Staging comment is useful when we go to the artifactory UI. Put here a comment about the release.
- Then click on "Build and Release to Artifactory":
- the build number which failed
- the directory where artifacts are published. This you can find in the build log (check a line like [14:08:08]: Checkout directory: /var/teamcity/buildagent-jdk7/work/c88d543299f59f28)Dir = new File(system.get('teamcity.build.checkoutDir')) -> def buildDir='/path/to/build/dir'
- def version= configParams['org.jfrog.artifactory.releaseManagement.releaseVersion_groovyVersion'] -> def version = 'groovy-xxx'
def buildNumber = configParams['build.number'] -> def buildNumber = '35'
The script uses log.message and log.error, which are not defined if you copied the script from the build step. If this is the case, add the following line after "def buildNumber=":
- def log = [message: { println it }, error: {println it} ]
then you will find a number of flags that tell which step should be executed. Update the flags accordingly to the steps which failed, then execute the script.
Instructions (for 2.0+ releases)
... | http://docs.codehaus.org/pages/diffpages.action?pageId=11454&originalId=233046815 | 2014-10-20T08:39:35 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.codehaus.org |
This page describes the actions you can perform on individual issues. Those actions can also be performed on sets of issues in bulk change operations.
To perform review actions on an issue, you must be logged in and have the Browse permission on the project the issue is in (plus the Administer Issues permission to flag the issue false positive or to change its severity). If you have the correct permissions, you see a row of links added to the bottom of the issue block.
Starting a thread of discussion
To start a thread of discussion, simply use the Comment link.
Each comment is added to the issue block in a running log. You may edit or delete your own comments.
Marking an issue as false positive
To mark an issue as false positive, click on the False Positive link.
Note that false positives are not displayed by default in the Component viewer. To display them, select False positives in the drop-down list:
If you tend to mark a lot of issues as false positives, it means that some coding rules are not adapted to your context. So, you can either completely deactivate them in the quality profile or use issue exclusions to narrow the focus of the rules so they are not used on specific parts (or types of object) of your application.
Assigning an issue to a developer
Any issues (whose status is Open or Reopened or Confirmed) can be assigned to a developer by clicking on the Assign link.
Because issues are fully integrated within the Notification service, developers can receive email notifications when issues are assigned to them, changes are made on issues reported by them, etc.
Changing the severity of an issue
The severity of any issues can be changed by clicking on the Change severity link.
Linking an issue to an action plan
Action plans can be created to group issues. Action plans are buckets of issues that you want to group because they will to have similar timeframe for resolution.
Action plans can be created by project administrators from Configuration > Action Plans:
Each issue can then be linked to an action plan:
The Action Plans widget lets you track action plan status:
Both the issue counts and the segments of the bar graph are linked to the lists of relevant issues.
Viewing an Issue change log
The change log of an issue can be displayed by clicking on the title of the rule and then on the Changelog link:
Linking an Issue to an External Task Manager
It is possible to link an issue to an external task manager. To link issues to JIRA for example, you can install the SonarQube JIRA plugin. | http://docs.codehaus.org/pages/viewpage.action?pageId=238911643 | 2014-10-20T08:21:01 | CC-MAIN-2014-42 | 1413507442288.9 | [array(['https://raw.github.com/SonarSource/screenshots/master/issues/issue-workflow.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/issues/issue-workflow-comment.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/issues/issues-drilldown-display-false-positives.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/settings/action-plans-list.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/issues/issue-workflow-link-to-action-plan.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/issues/open-action-plans-widget.png',
None], dtype=object)
array(['https://raw.github.com/SonarSource/screenshots/master/issues/issue-workflow-changelog.png',
None], dtype=object) ] | docs.codehaus.org |
.NET Core Support
This is a general description of .NET Core support.
LTS and Current Release Trains
Having two support release trains is a common concept in use throughout the software world, specially for open-source projects like .NET Core. .NET Core has the following support release trains: Long Term Support (LTS) and Current. LTS releases are maintained for stability over their lifecycle, receiving fixes for important issues and security fixes. New feature work and additional bug fixes take place in Current releases. From a support perspective, these release trains have the following support lifecycle attributes.
LTS releases are
- Supported for three years after the general availability date of a LTS release
- Or one year after the general availability of a subsequent LTS release
Current releases are
- Supported within the same three-year window as the parent LTS release
- Supported for three months after the general availability of a subsequent Current release
- And one year after the general availability of a subsequent LTS release
Versioning
New LTS releases are marked by an increase in the Major version number. Current releases have the same Major number as the corresponding LTS train and are marked by an increase in the Minor version number. For example, 1.0.3 would be LTS and 1.1.0 would be Current. Bug fix updates to either train increment the Patch version. For more information on the versioning scheme, see .NET Core Versioning.
What causes updates in LTS and Current trains?
To understand what specific changes, such as bug fixes or the addition of APIs, cause updates to the version numbers review the Decision Tree section in the Versioning Documentation. There is not a golden set of rules that decide what changes are pulled into the LTS branch from Current. Typically, necessary security updates and patches that fix expected behaviour are reasons to update the LTS branch. We also intend to support recent desktop developer operating systems on the LTS branch, though this may not always be possible. A good way to catch up on what APIs, fixes, and operating systems are supported in a certain release is to browse its release notes on GitHub. | https://docs.microsoft.com/en-us/dotnet/core/versions/lts-current | 2017-12-11T02:07:56 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.microsoft.com |
Start layout XML for mobile editions of Windows 10 (reference)
Applies to
- Windows 10
Looking for consumer information? See Customize the Start menu
On Windows 10 Mobile, you can use the XML-based layout to modify the Start screen and provide the most robust and complete Start customization experience.
On Windows 10 Mobile, the customized Start works by:
- Windows 10 performs checks to determine the correct base default layout. The checks include the mobile edition, whether the device is dual SIM, the column width, and whether Cortana is supported for the country/region.
- Windows 10 ensures that it does not overwrite the layout that you have set and will sequence the level checks and read the file layout such that any multivariant settings that you have set is not overwritten.
- Windows 10 reads the LayoutModification.xml file and appends the group to the Start screen.
Default Start layouts
The following diagrams show the default Windows 10, version 1607 Start layouts for single SIM and dual SIM devices with Cortana support, and single SIM and dual SIM devices with no Cortana support.
The diagrams show:
- Tile coordinates - These are determined by the row number and the column number.
- Fold - Tiles "above the fold" are visible when users first navigate to the Start screen. Tiles "below the fold" are visible after users scroll up.
- Partner-customizable tiles - OEM and mobile operator partners can customize these areas of the Start screen by prepinning content. The partner configurable slots are:
- Rows 6-9
- Rows 16-19
LayoutModification XML
IT admins can provision the Start layout by creating a LayoutModification.xml file. This file supports several mechanisms to modify or replace the default Start layout and its tiles.
Note
To make sure the Start layout XML parser processes your file correctly, follow these guidelines when writing your LayoutModification.xml file:
- Do not leave spaces or white lines in between each element.
- Do not add comments inside the StartLayout node or any of its children elements.
- Do not add multiple rows of comments.
The following table lists the supported elements and attributes for the LayoutModification.xml file.
start:Group
start:Group tags specify a group of tiles that will be appended to Start. You can set the Name attribute to specify a name for the Start group.
Note
Windows 10 Mobile only supports one Start group.
For Windows 10 Mobile, start:Group tags can contain the following tags or elements:
- start:Tile
- start:SecondaryTile
- start:PhoneLegacyTile
- start:Folder
Specify Start tiles
To pin tiles to Start, you must use the right kind of tile depending on what you want to pin.
Tile size and coordinates
All tile types require a size (Size) and coordinates (Row and Column) attributes regardless of the tile type that you use when prepinning items to Start.
The following table describes the attributes that you must use to specify the size and location for the tile.
For example, a tile with Size="2x2", Row="2", and Column="2" results in a tile located at (2,2) where (0,0) is the top-left corner of a group.
start:Tile
You can use the start:Tile tag to pin a Universal Windows app to Start.
To specify an app, you must set the AppUserModelID attribute to the application user model ID that's associated with the corresponding app.
The following example shows how to pin the Microsoft Edge Universal Windows app:
<start:Tile
start:SecondaryTile
You can use the start:SecondaryTile tag to pin a Web link through a Microsoft Edge secondary tile.
The following example shows how to create a tile of the Web site's URL using the Microsoft Edge secondary tile:
<start:SecondaryTile
The following table describes the other attributes that you can use with the start:SecondaryTile tag in addition to Size, Row, and Column.
Secondary Microsoft Edge tiles have the same size and location behavior as a Universal Windows app.
start:PhoneLegacyTile
You can use the start:PhoneLegacyTile tag to add a mobile app that has a valid ProductID, which you can find in the app's manifest file. The ProductID attribute must be set to the GUID of the app.
The following example shows how to add a mobile app with a valid ProductID using the start:PhoneLegacyTile tag:
<start:PhoneLegacyTile
start:Folder
You can use the start:Folder tag to add a folder to the mobile device's Start screen.
You must set these attributes to specify the size and location of the folder: Size, Row, and Column.
Optionally, you can also specify a folder name by using the Name attribute. If you specify a name, set the value to a string.
The position of the tiles inside a folder is relative to the folder. You can add any of the following tile types to the folder:
- Tile - Use to pin a Universal Windows app to Start.
- SecondaryTile - Use to pin a Web link through a Microsoft Edge secondary tile.
- PhoneLegacyTile - Use to pin a mobile app that has a valid ProductID.
The following example shows how to add a medium folder that contains two apps inside it:
<start:Folder <start:Tile <start:PhoneLegacyTile </start:Folder>
RequiredStartTiles
You can use the RequiredStartTiles tag to specify the tiles that will be pinned to the bottom of the Start screen even if a restored Start screen does not have the tiles during backup or restore.
Note
Enabling this Start customization may be disruptive to the user experience.
For Windows 10 Mobile, RequiredStartTiles tags can contain the following tags or elements. These are similar to the tiles supported in start:Group.
- Tile - Use to pin a Universal Windows app to Start.
- SecondaryTile - Use to pin a Web link through a Microsoft Edge secondary tile.
- PhoneLegacyTile - Use to pin a mobile app that has a valid ProductID.
- Folder - Use to pin a folder to the mobile device's Start screen.
Tiles specified within the RequiredStartTiles tag have the following behavior:
- The partner-pinned tiles will begin in a new row at the end of the user-restored Start screen.
- If there’s a duplicate tile between what the user has in their Start screen layout and what the OEM has pinned to the Start screen, only the app or tile shown in the user-restored Start screen layout will be shown and the duplicate tile will be omitted from the pinned partner tiles at the bottom of the Start screen.
The lack of duplication only applies to pinned apps. Pinned Web links may be duplicated.
- If partners have prepinned folders to the Start screen, Windows 10 treats these folders in the same way as appended apps on the Start screen. Duplicate folders will be removed.
- All partner tiles that are appended to the bottom of the user-restored Start screen will be medium-sized. There will be no gaps in the appended partner Start screen layout. Windows 10 will shift tiles accordingly to prevent gaps.
Sample LayoutModification.xml
The following sample LayoutModification.xml shows how you can configure the Start layout for devices running Windows 10 Mobile:
<?xml version="1.0" encoding="utf-8"?> <LayoutModificationTemplate xmlns="" xmlns: <DefaultLayoutOverride> <StartLayoutCollection> <defaultlayout:StartLayout> <start:Group <start:Tile <start:Tile </start:Group> </defaultlayout:StartLayout> </StartLayoutCollection> </DefaultLayoutOverride> <RequiredStartTiles> <PhoneLegacyTile ProductID="{b00d3141-1caa-43aa-b0b5-78c1acf778fd}"/> <PhoneLegacyTile ProductID="{C3F8E570-68B3-4D6A-BDBB-C0A3F4360A51}"/> <PhoneLegacyTile ProductID="{C60904B7-8DF4-4C2E-A417-C8E1AB2E51C7}"/> <Tile AppUserModelID="Microsoft.MicrosoftFeedback_8wekyb3d8bbwe!ApplicationID"/> </RequiredStartTiles> </LayoutModificationTemplate>
Use Windows Provisioning multivariant support
The Windows Provisioning multivariant capability allows you to declare target conditions that, when met, supply specific customizations for each variant condition. For Start customization, you can create specific layouts for each variant that you have. To do this, you must create a separate LayoutModification.xml file for each variant that you want to support and then include these in your provisioning package. For more information on how to do this, see Create a provisioning package with multivariant settings.
The provisioning engine chooses the right customization file based on the target conditions that were met, adds the file in the location that's specified for the setting, and then uses the specific file to customize Start. To differentiate between layouts, you can add modifiers to the LayoutModification.xml filename such as "LayoutCustomization1". Regardless of the modifier that you use, the provsioning engine will always output "LayoutCustomization.xml" so that the OS has a consistent file name to query against.
For example, if you want to ensure that there's a specific layout for a certain mobile operator in a certain country/region, you can:
- Create a specific layout customization file and then name it LayoutCustomization1.xml.
- Include the file as part of your provisioning package.
- Create your multivariant target and reference the XML file within the target condition in the main customization XML file.
The following example shows what the overall customization file might look like with multivariant support for Start:
<?xml version="1.0" encoding="utf-8"?> <WindowsCustomizatons> <PackageConfig xmlns="urn:schemas-Microsoft-com:Windows-ICD-Package-Config.v1.0"> <ID>{6aaa4dfa-00d7-4aaa-8adf-73c6a7e2501e}</ID> <Name>My Provisioning Package</Name> <Version>1.0</Version> <OwnerType>OEM</OwnerType> <Rank>50</Rank> </PackageConfig> <Settings xmlns="urn:schemas-microsoft-com:windows-provisioning"> <Customizations> <Targets> <Target Id="Operator XYZ"> <TargetState> <Condition Name="MCC" Value="Range:310, 320" /> <Condition Name="MNC" Value="!Range:400, 550" /> </TargetState> </Target> <Target Id="Processor ABC"> <TargetState> <TargetState> <Condition Name="ProcessorName" Value="Pattern:.*Celeron.*" /> <Condition Name="ProcessorType" Value="Pattern:.*I|intel.*" /> </TargetState> </TargetState> </Target> </Targets> <Common> <Settings> <Policies> <AllowBrowser>1</AllowBrowser> <AllowCamera>1</AllowCamera> <AllowBluetooth>1</AllowBluetooth> </Policies> <HotSpot> <Enabled>1</Enabled> </HotSpot> </Settings> </Common> <Variant> <TargetRefs> <TargetRef Id="Operator XYZ" /> </TargetRefs> <Settings> <StartLayout>c:\users\<userprofile>\appdata\local\Microsoft\Windows\Shell\LayoutCustomization1.XML</StartLayout> <HotSpot> <Enabled>1</Enabled> </HotSpot> </Settings> </Variant> </Customizations> </Settings> </WindowsCustomizatons>
When the condition is met, the provisioning engine takes the XML file and places it in the location that Windows 10 has set and then the Start subsystem reads the file and applies the specific customized layout.
You must repeat this process for all variants that you want to support so that each variant can have a distinct layout for each of the conditions and targets that need to be supported. For example, if you add a Language condition, you can create a Start layout that has it's own localized group or folder titles.
Add the LayoutModification.xml file to the image
Once you have created your LayoutModification.xml file to customize devices that will run Windows 10 Mobile, you can use Windows ICD to add the XML file to the device:
- In the Available customizations pane, expand Runtime settings, select Start and then click the StartLayout setting.
- In the middle pane, click Browse to open File Explorer.
- In the File Explorer window, navigate to the location where you saved your LayoutModification.xml file.
- Select the file and then click Open.
This should set the value of StartLayout. The setting appears in the Selected customizations pane.
Related topics
- Manage Windows 10 Start layout options
- Configure Windows 10 taskbar
- Customize Windows 10 Start and taskbar with Group Policy
- Customize Windows 10 Start and taskbar with ICD and provisioning packages
- Customize Windows 10 Start with mobile device management (MDM)
- Changes to Group Policy settings for Windows 10 Start
- Start layout XML for desktop editions of Windows 10 (reference) | https://docs.microsoft.com/en-us/windows/configuration/mobile-devices/start-layout-xml-mobile | 2017-12-11T02:08:08 | CC-MAIN-2017-51 | 1512948512054.0 | [array(['../images/mobile-start-layout.png',
'Start layout for Windows 10 Mobile'], dtype=object)] | docs.microsoft.com |
Troubleshooting - SnapProtect for Microsoft Hyper-V
- SnapProtect backup for online virtual machines in Hyper-V clusters fails
- SnapProtect backups for some storage arrays may fail
- Backup fails when using third party hardware VSS provider: "VSS service or writers may be in a bad state."
- While performing a restore operation, the MediaAgent does not have access to the storage device
- File-level restore from Snap fails
- Virtual machines restored from SnapProtect are not powered on automatically
- File-level restore fails
- Virtual machines are not connected to the network after a restore
- Cannot restore files from a Windows 2012 virtual machine using deduplication
- Recovering data associated with deleted clients and storage policies
- Backup goes to Pending state with error: "Unable to find OS device(s) corresponding to clone(s)."
- Backup using NetApp cluster mode LUNs fails with error: "Snapshot operation not allowed due to clones backed by snapshots"
SS0014: SnapProtect backup for online virtual machines in Hyper-V clusters fails
Resolution
You can resolve this issue by temporarily suspending the virtual machine during the SnapProtect operation.
SS0015: SnapProtect backups for some storage arrays may fail
Resolution
You can resolve this issue by making sure that before running a backup, you need to have a proxy machine with a MediaAgent installed on it. Also, the proxy machine should have a small sized LUN provisioned from the same file server where the snapshots are created.
HPV0022: Backup fails when using third party hardware VSS provider: "VSS service or writers may be in a bad state."
Symptom
When you have installed a hardware VSS provider provided by a storage vendor, SnapProtect backups fail with the following error:
Error Code: [91:8]
Description: Unable to snap virtual machine [<VM_Name>]. VSS service or writers may be in a bad state. Please check vsbkp.log and Windows Event Viewer for VSS related messages. Or run 'vssadmin list writers' from command prompt to check state of the VSS writers; if the 'Microsoft Hyper-V VSS Writer' reports an error, please restart the 'Virtual Machine Management Service' (vmms.exe).
Cause
The filer's hardware VSS provider might not be able to create a shadow copy for the snapshot.
Resolution
Configure the VSSProvider additional setting to use the software VSS provider from Microsoft:
- To find the ID, run the following command on the client computer:
vssadmin list providers
- Copy the provider ID for the software VSS provider from Microsoft.
An example of a provider ID is 400a2ff4-5eb1-44b0-8a05-1fcac0bcf9ff.
- From the CommCell Browser, navigate to Client Computers.
- Right-click the client and select Properties.
- Click Advanced, then click the Additional Settings tab.
- Click Add.
- On the Add Additional Settings dialog box, provide the following information:
- Name: Type VSSProvider.
- The Category field is automatically set to VirtualServer and the Type field is set to STRING.
- In the Value file, enter the VSS Provider ID for Microsoft.
- Click OK to save additional settings, advanced settings, and client properties.
Your backup operations will use the VSS Provider from Microsoft.
SS0016: While performing a restore operation, the MediaAgent does not have access to the storage device
Resolution
If the storage policy uses a MediaAgent that does not have access to the storage device where the snapshot was created, additional steps are required while selecting the options in the Restore Options for all selected items window.
- Click on the Advanced button.
- From the Advanced Restore Options window, click the Data Path tab.
- Select a proxy from the Use Proxy dropdown to mount the snapshot.
- Click OK.
SS0017: File-level restore from Snap fails
Symptom
The restore operation fails when you are restoring files or folders from a snapshot.
Cause
The restore fails if the Hyper-V role is not enabled in the MediaAgent computer.
Resolution
In this scenario, you can use following procedure to restore files and folders from a disk level backup:
- Enable Hyper-V role on the MediaAgent computer and perform the restore operation.
- Using advanced browse option, select another MediaAgent computer with Hyper-V role enabled on it. It could possibly be the computer where the snap backups were performed or the destination computer specified during the restore where the MediaAgent is available with the Hyper-V role enabled.
SS0018: Virtual machines restored from SnapProtect are not powered on automatically
Cause
The virtual machine might have been in a running state during the SnapProtect backup. Consequently, the virtual machine is restored in a saved state.
Resolution
To resolve this issue:
- Right-click the virtual machine in the Hyper-V Manager.
- Click Delete Saved State.
SS0019: File-level restore fails
Symptom
The restore operation fails when you are restoring files or folders from a Disk Level backup.
Cause
The restore fails if the Enable Granular Recovery option is not selected before performing the backup or the Granular Recovery operation fails.
Resolution
In such scenario, you can use following procedure to restore files and folders from a disk level backup:
- Mount the snapshot that contains the data which you want to restore. For more information, refer to Mount Snapshots.
- Browse the Destination Path which you selected while mounting the snapshot and locate the VHD file for the disk which contains the required files and folder.
- Use the DiskManager to mount the VHD file on any Windows server. A new drive will be created on the Windows server.
- Browse the files and folder on this drive and copy the required files and folders to a desired destination.
HPV0011: Virtual machines are not connected to the network after a restore
Symptom
When a Hyper-V virtual machine is restored to a different Hyper-V host (an out-of-place restore), the restored virtual machine has no network device. NICs are not automatically restored with the corresponding virtual machine.
Possible.
HPV0021: Cannot restore files from a Windows 2012 virtual machine using deduplication
Symptom
When restoring from a backup of a Windows 2012 virtual machine that has deduplication enabled, a file-level restore completes successfully but only creates stub files.
Cause
Windows 2012.
HPV0023: Recovering data associated with deleted clients and storage policies
Symptom
In a disaster recovery scenario, use the following procedure to recover data associated with the following entities:
- Deleted storage policy
- Deleted client, agent, backup set or instance
Before You Begin
This procedure can be performed when the following are available:
- You have a Disaster Recovery Backup that contains information on the entity that you are trying to restore. For example, if you wish to recover a storage policy (and the data associated with the storage policy) that was accidentally deleted, you must have a copy of the disaster recovery backup that was performed before deleting the storage policy.
- Media containing the data you wish to recover is available and not overwritten.
- If a CommCell Migration license was available in the CommServe when the disaster recovery backup was performed, no additional licenses are required. If not, obtain the following licenses:
- IP Address Change license
- CommCell Migration license
See License Administration for more details.
- A standby computer, which is used temporarily to build a CommServe.
Resolution
Recovering Deleted Data
- Locate the latest Disaster Recovery Backup that contains the information on the entity (storage policy, client, agent, backup set or instance) you are trying to restore.
- Check the Phase 1 destination for the DR Set or use Restore by Jobs for CommServe DR Data to restore the data.
- If the job was pruned and you know the media containing the Disaster Recovery Backup, you can move the media in the Overwrite Protect Media Pool. See Accessing Aged Data for more information. You can then restore the appropriate DR Set associated with the job as described in Restore by Jobs for CommServe DR Data.
- If the job is pruned and you do not know the media containing the Disaster Recovery Backup, you can do one of the following:
- If you regularly run and have copies of the Data on Media and Aging Forecast report, you can check them to see if the appropriate media is available.
- If you do not have an appropriate report, and know the media that contains the DR Backup, catalog the media using Media Explorer. Once the cataloging process is completed, details of the data available in the media are displayed.
- On a standby computer, install the CommServe software. For more information on installing the CommServe, see Install the CommServe.
- Restore the CommServe database using the CommServe Disaster Recovery Tool from the Disaster Recovery Backup described in Step 1. (See CommServe Disaster Recovery Tool for step-by-step instructions.)
- Verify and ensure that the NetApp Client Event Manager NetApp Communications Service (EvMgrS) is running.
- If you did not have a CommCell Migration license available in the CommServe when the disaster recovery backup was performed, apply the IP Address Change license and the CommCell Migration license on the standby CommServe. See Activate Licenses for step-by-step instructions.
- Export the data associated with the affected clients from the standby CommServe as described in Export Data from the Source CommCell.
When you start the Command Line Interface to capture data, use the name of the standby CommServe in the -commcell argument.
- Import the exported data to the main CommServe as described in Import Data on the Destination CommCell.
This brings back the entity in the CommServe database and the entity is visible in the CommCell Browser. (Press F5 to refresh the CommCell Browser if the entity is not displayed after a successful merge.)
- You can now browse and restore the data from the appropriate entity.
As a precaution, mark media (tape media) associated with the source CommCell as READ ONLY before performing a data recovery operation in the destination CommCell.
HPV0025: Backup goes to Pending state with error: "Unable to find OS device(s) corresponding to clone(s)."
Symptom
When performing an SnapProtect backup, the job goes pending and the following error message is displayed:
Error Code: [32:392] Unable to find OS device(s) corresponding to clone(s). Possible reasons could be 1) Improper FC/iSCSI H/W connection between the host and the array 2) The OS didn't find the device corresponding to the clone. : [Unable to find OS device(s) corresponding to snap(s)/clone(s)]
Cause
If a Fibre Channel (FC) adapter is found on a Windows or UNIX client, FC is used by default when attempting to mount snapshots. If the array host is configured to use iSCSI and the storage array is not configured with an FC port, the storage array may still create FC host groups even though FC connectivity is not available.
Resolution
To use iSCSI even when a Fibre Channel adapter is detected, set the sSNAP_IsISCSI additional setting to Y for the Virtual Server Agent. This setting forces iSCSI usage, enabling device discovery to succeed.
HPV0026: Backup using NetApp cluster mode LUNs fails with error: "Snapshot operation not allowed due to clones backed by snapshots"
Symptom
When performing an SnapProtect backup for a Microsoft Hyper-V virtual machine using NetApp cluster mode LUNs, the backup fails with the following error:
The provider was unable to perform the request at this time. The job will go to pending state and will be retried.
The cvma.log file contains the following error message:
Snapshot operation not allowed due to clones backed by snapshots.
Cause
This failure happens during a clone operation, typically of a large LUN. The backup attempt cannot continue until the clone operation is completed, and must be retried after an appropriate interval (for example, 60 seconds).
Resolution
You can configure additional settings to set the number of times the backup operation is retried (snapshot retry count) and the time interval in seconds between retries (snapshot retry interval).
Use the nONTAPSnapCreationRetryCount additional setting to increase the snapshot retry count to 30.
Use the nONTAPSnapCreationRetryIntervalInSecs additional setting to increase the snapshot retry interval to 60. | http://docs.snapprotect.com/netapp/v10/article?p=products/vs_ms/snap/troubleshooting.htm | 2017-12-11T01:52:28 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.snapprotect.com |
Can I import my Assets?
Yes!
We recommend using our Import Service when you have several hundred or thousand assets to import at one time. To learn more about imports, view our dedicated article on imports.
If you are trying to create fewer than 250 assets, we often recommend using the clone asset feature on the mobile. | http://docs.inspectall.com/article/128-can-i-import-my-assets | 2017-12-11T02:23:03 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.inspectall.com |
Web App Firewall support for cluster configurations
Note:
Web App Firewall support for Striped and partially striped configurations was introduced in Citrix ADC release 11.0.
A cluster is a group of Citrix ADC appliances that are configured and managed as a single system. Each appliance in the cluster is called a node. Depending on the number of nodes the configurations are active on, cluster configurations are referred to as striped, partially striped, or spotted configurations. The Web App Firewall is fully supported in all configurations.
The two main advantages of striped and partially striped virtual server support in cluster configurations are the following:
- Session failover support—Striped and partially striped virtual server configurations support session failover. The advanced Web App Firewall security features, such as Start URL Closure and the Form Field Consistency check, maintain and use sessions during transaction processing. In ordinary high availability configurations, or in spotted cluster configurations, when the node that is processing the Web App Web App Firewall’s ability to handle multiple simultaneous requests, thereby improving the overall performance.
Security checks and signature protections can be deployed without the need for any additional cluster-specific Web App Firewall configuration. You just do the usual Web App Firewall configuration on the configuration coordinator (CCO) node for propagation to all the nodes.
Note:
The session information is replicated across multiple nodes, but not across all the nodes in the striped configuration. Therefore, failover support accommodates a limited number of simultaneous failures. If multiple nodes fail simultaneously, the Web App Firewall might lose the session information if a failure occurs before the session is replicated on another node.
HighlightsHighlights
- Web App Firewall offers scalability, high throughput, and session failover support in cluster deployments.
- All Web App Firewall security checks and signature protections are supported in all cluster configurations.
- Character-Maps are not yet supported for a cluster. The learning engine recommends Field-Types in learned rules for the Field Format security check.
- Stats and learned rules are aggregated from all the nodes in a cluster.
- Distributed Hash Table (DHT) provides the caching of the session and offers the ability to replicate session information across multiple nodes. When a request comes to the virtual server, the Citrix ADC appliance creates Web App Firewall sessions in the DHT, and can also retrieve the session information from the DHT.
- Clustering is licensed with the Enterprise and Platinum licenses. This feature is not available with the Standard license. | https://docs.citrix.com/en-us/citrix-adc/13/application-firewall/appendixes/cluster-configuration.html | 2019-05-19T11:57:12 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.citrix.com |
EELib
That contains ready made symbols for a wide range of components and which can be simulated.
Many of these components have optional US and EU style symbols, we split them, so you can select those you like. Click on the drop down list or right click to popup the context menu, it contains many packages or parameters. EasyEDA will remember your choices for the next time.
Don’t forget to use Filter to locate a component fast. For example, you just need to type
res to find all of resistors:
2) Libraries
or press the hotkey combination
Shift+F.
then you will see a dialog as shown in the image below.
The more information please refer next “Libraries” section.
Find Components In the Schematic
Finding individual components in a dense schematic can be very time consuming. EasyEDA has an easy way to find and jump to components:
Topbar > View > Find…
(or
Ctrl+F)
Note: You have to click OK in this dialog or use the Enter key.
This feature will find, highlight and center in the window, parts by their Prefix (or reference designator). However, it cannot be used to find net names or other text in a schematic.
This is where the Design Manager comes in.
Left Navigation Panel > Design Manager, or use hotkey
ctrl+D.
The Schematic Design Manager is a very powerful tool for finding components, nets and pins.
Clicking on a Component item highlights the component and pans it to the center of the window.
Clicking on a Part pins item brings up a temporary pointer:
Placing Components
Find the component which you plan to place to your schematic at “Libraries”, then move your mouse to the canvas and left click. If you want to add more, just left click again. To end the current sequence of placements, right click once or press
ESC.
Don’t try to Drag and Drop a component to the canvas: EasyEDA team thinks that Click-Click to place components will be easier to use than a Click-Drag mode.
Multi-part Components
The number of pins on some components can be quite large. That’s why it’s easier to divide such a.
Schematic Library Wizard
How many times have you hit a schematic capture roadblock because you couldn’t find a component symbol?
Well, in EasyEDA that would be never because the Schematic Library Wizard provides a quick and easy way to create a general schematic library symbol.
The Schematic Library Wizard… command can be found in the top toolbar.
Or Edit > Schematic Library Wizard in a new schematic lib document.
The professional function please refer at Schematic Library Wizard | https://docs.easyeda.com/en/Schematic/SchLib-Search-And-Place/index.html | 2019-05-19T11:20:25 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['https://image.easyeda.com/images/045_Schematic_EElib.png', None],
dtype=object)
array(['https://image.easyeda.com/images/046_Schematic_EElib-res.png',
None], dtype=object)
array(['https://image.easyeda.com/images/026_Introduction_Parts.png',
None], dtype=object)
array(['https://image.easyeda.com/images/022_Introduction_FindComponent.png',
None], dtype=object)
array(['https://image.easyeda.com/images/023_Introduction_DesignManagerFindComponent.png',
None], dtype=object)
array(['https://image.easyeda.com/images/024_Introduction_DesignManagerClickComponentNet.png',
None], dtype=object)
array(['https://image.easyeda.com/images/059_Schematic_Mutil-Components.png',
None], dtype=object)
array(['https://image.easyeda.com/images/068_Schematic_LibWizar.png',
None], dtype=object) ] | docs.easyeda.com |
What is baseline protection?
In the last year, identity attacks have increased by 300%. To protect your environment from the ever-increasing attacks, Azure Active Directory (Azure AD) introduces a new feature called baseline protection. Baseline protection is a set of predefined conditional access policies. The goal of these policies is to ensure that you have at least the baseline level of security enabled in all editions of Azure AD.
This article provides you with an overview of baseline protection in Azure Active Directory.
Require MFA for admins
Users with access to privileged accounts have unrestricted access to your environment. Due to the power these accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification when they are used to sign-in. In Azure Active Directory, you can get a stronger account verification by requiring multi-factor authentication (MFA).
Require MFA for admins is a baseline policy that requires MFA for the following directory roles:
- Global administrator
- SharePoint administrator
- Exchange administrator
- Conditional access administrator
- Security administrator
- Helpdesk administrator / Password administrator
- Billing administrator
- User administrator
This baseline policy provides you with the option to exclude users. You might want to exclude one emergency-access administrative account to ensure you are not locked out of the tenant.
Enable a baseline policy
To enable a baseline policy:
Sign in to the Azure portal as global administrator, security administrator, or conditional access administrator.
In the Azure portal, on the left navbar, click Azure Active Directory.
On the Azure Active Directory page, in the Security section, click Conditional access.
In the list of policies, click a policy that starts with Baseline policy:.
To enable the policy, click Use policy immediately.
Click Save.
What you should know identities for Azure resources or service principals with certificates. As a temporary workaround, you can exclude specific user accounts from the baseline policy.
Baseline policies apply to legacy authentication flows like POP, IMAP, older Office desktop client.
Next steps
For more information, see:
Was this page helpful?
Thank you for your feedback.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/baseline-protection?WT.mc_id=docs-azuredevtips-micrum | 2019-05-19T10:30:02 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['media/baseline-protection/01.png', 'Azure Active Directory'],
dtype=object) ] | docs.microsoft.com |
article explains two methods you can use to extend the Visual Studio build process:
Overriding specific predefined targets defined in Microsoft.Common.targets.
Overriding the "DependsOn" properties defined in Microsoft.Common.targets.
Override predefined targets
The Microsoft.Common.targets file contains a set of predefined empty targets that overriding the predefined targets,:
<Project> ... <Target Name="BeforeBuild"> <!-- Insert tasks to run before build here --> </Target> <Target Name="AfterBuild"> <!-- Insert tasks to run after build here --> </Target> </Project>
Build the project file.
The following table shows all of the targets in Microsoft.Common.targets that you can safely override.
Override:
>.
Commonly overridden DependsOn properties
See also
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/visualstudio/msbuild/how-to-extend-the-visual-studio-build-process?view=vs-2019 | 2019-05-19T11:25:20 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.microsoft.com |
Distance to the focus point for depth of field
Use this object to define the depth of field focal point
Apparent size of the Camera object in the 3D View
Perspective Camera lens value in millimeters
Unit to edit lens in for the user interface
Orthographic Camera scale (similar to zoom)
Opacity (alpha) of the darkened overlay in Camera view | https://docs.blender.org/api/blender_python_api_2_63_8/bpy.types.Camera.html | 2019-05-19T10:17:36 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.blender.org |
Tip: 6 Best Practices for Physical Servers Hosting Hyper-V Roles
Before setting up a physical server to host the Hyper-V role, download, read, and understand information included in the white paper “Performance Tuning Guidelines for Windows Server 2008”. Three sections in this white paper that can have a significant impact on the performance of the physical server discuss tuning the server hardware and setting up the networking and storage subsystems. These are especially critical for Hyper-V because the hypervisor itself sits on top of the hardware layer as described earlier and controls all hardware in Windows Server 2008. The operating system itself essentially runs in a virtual machine, better known as the Parent Partition.
Here are six best practices for physical servers hosting the Hyper-V role.
Avoid Overloading the Server.
Ensure High-Speed Access to Storage.
Install Multiple Network Interface Cards.
Configure Antivirus Software to Bypass Hyper-V Processes and Directories.
Avoid Storing System Files on Drives Used for Hyper-V Storage
Do not store any system files (Pagefile.sys) on drives dedicated to storing virtual machine data.
Monitor Performance to Optimize and Manage Server Loading.. | https://docs.microsoft.com/en-us/previous-versions/technet-magazine/dd744830(v=msdn.10) | 2019-05-19T10:27:41 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.microsoft.com |
Click Presentations in the sidebar..
Export sessions to Excel
You can export the filtered sessions to Excel CSV. Adjust your filter settings, and then click Download to Excel..
Show a specific session
Click a session to see more details, including the individual events recorded by the app.
Show a specific event
Click an event to see its details. If it was a form submission, you can also see the captured form data.
Forms
If your developer used the SDK and adds the forms.json file, you'll be able to explore form submissions in more detail.
Click an event to see the form data it recorded.
Receive form submissions in weekly emails
We can send your form submissions on a regular basis. Email us with the name of the presentation, the form name(s), and when you want to receive the email.
Export events and forms using webhooks
Mobile Locker offers webhooks, which are a way of automatically sending data in realtime from one system to another. Read the developer documentation for instructions. | https://docs.mobilelocker.com/docs/presentation-reports-and-analytics | 2019-05-19T10:28:09 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.mobilelocker.com |
This document is placed in the public domain.
Abstract
This document can be considered a companion to the tutorial. It shows how to use Python, and even more importantly, how not to use Python.
Language Constructs You Should Not Use¶
While Python has relatively few gotchas compared to other languages, it still has some constructs which are only useful in corner cases, or are plain dangerous.
from module import *¶
Inside Function Definitions¶ were
local and which were global. In Python 2.1 this construct causes warnings, and
sometimes even errors.
At Module Level¶ questions a
builtin.
When It Is Just Fine¶
There are situations in which
from module import * is just fine:
- The interactive prompt. For example,
from math import *makes Python an amazing scientific calculator.
- When extending a module in C with a module in Python.
- When the module advertises itself as
from import *safe.
Unadorned
exec,
execfile() and friends¶()
from module import name1, name2¶
This is a “don’t” which is much weaker than the previous “don’t”s but is still something you should not do if you don’t have good reasons to do that. The reason it is usually a
except:¶¶.
The best version of this function uses the
open() call as a context
manager, which will ensure that the file gets closed as soon as the
function returns:
def get_status(file): with open(file) as fp: return fp.readline()
Using the Batteries¶.
Using Backslash to Continue Statements¶)) | http://docs.activestate.com/activepython/2.7/python/howto/doanddont.html | 2019-05-19T11:25:22 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.activestate.com |
Terms to describe which data in a given entity model can be aggregated, and how. This structured type or entity container supports the $apply system query option The context-defining properties need either be part of the result entities, or be restricted to a single value by a pre-filter operation. Examples are postal codes within a country, or monetary amounts whose context is the unit of currency. | http://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs02/vocabularies/Org.OData.Aggregation.V1.xml | 2019-05-19T10:36:19 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.oasis-open.org |
Lumberyard Release Notes – Beta 1.17 (December 2018)
Lumberyard Beta 1.17 adds over 70.17.
Topics
Identify Slice Overrides and Hierarchies in the Entity Outliner
With Lumberyard 1.17, see the following workflow and UI improvements when working with slices.
- Slices with overrides appear orange
When you make changes to your slices, the entity icon appears orange if the slice has an override in the Entity Outliner. The entity icon for a parent entity has a dot if a child entity has an override. This feature helps you identify which entities and its children have overrides.
- Slice roots appear in the Entity Outliner
You can now identify if a slice is a root slice. Entities that are shaded are root slices. This feature helps you identify the structure of your slice hierarchy and understand which slices inherit.
- Moving an entity from a slice hierarchy
You can now select and drag an entity from a slice hierarchy to create a standalone entity. You can also select and drag from one slice hierarchy to another. This adds an override for the slice that you moved from and the slice that you moved to. When you save the override, this update adds the entity to the new slice hierarchy and removes the entity from the previous hierarchy.
For more information, see Working with Slices in the Amazon Lumberyard User Guide.
Set Entities as Editor Only in the Entity Inspector
With Lumberyard 1.17, you can specify entities as editor only. This feature is useful if you want to disable an entity during gameplay mode or you want to create test entities or visual comments for users during development. Entities specified as editor only will not appear in gameplay.

You can also select an entity in the viewport and see whether it's inactive or editor only.
Example Start inactive

Example Editor only

For more information, see the Entity Inspector in the Amazon Lumberyard User Guide.
Sort Entities in the Entity Outliner
With Lumberyard 1.17, you can now click the filter icon and search for components. Entities that match your filters appear in the search results.

You can also click the sort icon and sort entities with the following options:

Sort: Manually – Manually organize.
For more information, see the Entity Outliner in the Amazon Lumberyard User Guide.
Find Entities in Lumberyard Editor
You can find entities more easily in Lumberyard Editor. This is helpful when you have many entities in your level and you want to navigate quickly to a specific entity.
- Find an entity from the Entity Outliner in the Asset Browser
In the Entity Outliner, right-click the slice or slice entity and choose Find slice in Asset Browser. The Asset Browser navigates to the corresponding slice.
For more information, see Asset Browser in the Amazon Lumberyard User Guide.
You can also select an entity in the Entity Outliner to find it in the viewport or in reverse.
- Find an entity from the Entity Outliner in the viewport
In the Entity Outliner, right-click an entity and choose Find in viewport. The viewport navigates to the corresponding entity.
- Find an entity from the viewport in the Entity Outliner
In the viewport, right-click a slice or entity and choose Find in Entity Outliner. The Entity Outliner navigates to the corresponding item.
New Tutorial: Slice Update Walkthrough
Watch the following video tutorial to learn about the slice updates in Lumberyard 1.17. | https://docs.aws.amazon.com/lumberyard/latest/releasenotes/lumberyard-v1.17.html | 2019-05-19T11:01:01 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['images/shared-working_with_slices.png',
'Managing slices in the Entity Outliner.'], dtype=object)] | docs.aws.amazon.com |
Testcase to check if C_Order filtering in M_ShipmentSchedule is working correctly for different Orgs.
Make sure that Autocomplete is enabled for C_OrderLine_ID, in M_ShipmentSchedule (=> SysAdmin, Org *)
In db, set the document no.s for both orders to the same number; also make sure this doc no. is not anywhere else
Log in with a role that has only access to Org I, open M_ShipmentSchedule
Log in with a role that has only access to Org II, open M_ShipmentSchedule | http://docs.metasfresh.org/tests_collection/testcases/Testcase_%231353.html | 2019-05-19T11:08:39 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.metasfresh.org |
Instrumenting Calls to a PostgreSQL Database
The
application-pgsql.properties file adds the X-Ray PostgreSQL
tracing interceptor to the data source created in
RdsWebConfig.java.
Example
application-pgsql.properties – PostgreSQL Database
Instrumentation
spring.datasource.continue-on-error=true spring.jpa.show-sql=false spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.jdbc-interceptors=com.amazonaws.xray.sql.postgres.TracingInterceptorspring.jpa.database-platform=org.hibernate.dialect.PostgreSQL94Dialect
Note
See Configuring Databases with Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide for details on how to add a PostgreSQL database to the application environment.
The X-Ray demo page in the
xray branch includes a demo that uses the
instrumented data source to generate traces that show information about the SQL queries
that
it generates. Navigate to the
/#/xray path in the running application or choose
Powered by AWS X-Ray in the navigation bar to see the demo
page.
Choose Trace SQL queries to simulate game sessions and store the
results in the attached database. Then, choose View traces in AWS X-Ray
to see a filtered list of traces that hit the API's
/api/history route.
Choose one of the traces from the list to see the timeline, including the SQL query.
| https://docs.aws.amazon.com/xray/latest/devguide/scorekeep-postgresql.html | 2019-05-19T11:09:14 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['images/scorekeep-demo.png', None], dtype=object)
array(['images/scorekeep-trace-sql.png', None], dtype=object)] | docs.aws.amazon.com |
Command line arguments and environment strings
Scripts are much more useful if they can be called with different values in the command line.
For instance, a script that extracts a particular value from a file could be written so that it prompts for a file name, reads the file name, and then extracts the data. Or, it could be written to loop through as many files as are in the command line, and extract the data from each file, and print the file name and data.
The second method of writing the program can easily be used from other scripts. This makes it more useful.
The number of command line arguments to a Tcl script is passed
as the global variable
argc . The name
of a Tcl script is passed to the script as the global variable
argv0 , and the rest of the command
line arguments are passed as a list in
argv. The name of the executable that runs the
script, such as
tclsh is given by the
command
info nameofexecutable
Another method of passing information to a script is with
environment variables. For instance, suppose you
are writing a program in which a user provides some sort of comment
to go into a record. It would be friendly to allow the user to edit
their comments in their favorite editor. If the user has defined an
EDITOR environment variable, then you
can invoke that editor for them to use.
Environment variables are available to Tcl scripts in a global
associative array
env . The index into
env is the name of the environment
variable. The command
puts "$env(PATH)" would
print the contents of the
PATH
environment variable.
Example
puts "There are $argc arguments to this script" puts "The name of this script is $argv0" if {$argc > 0} {puts "The other arguments are: $argv" } puts "You have these environment variables set:" foreach index [array names env] { puts "$index: $env($index)" } | http://docs.activestate.com/activetcl/8.5/tcl/tcltutorial/Tcl38.html | 2019-05-19T10:43:55 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.activestate.com |
Using Anaconda with Cloudera CDH¶
There are different methods of using Anaconda Scale on a cluster with Cloudera CDH:
- The freely available Anaconda parcel for Cloudera CDH.
- Custom Anaconda parcels for Cloudera CDH
- A dynamic, managed version of Anaconda on all of the nodes using Anaconda Scale
The freely available Anaconda parcel is based on Python 2.7 and includes the default conda packages that are available in the free Anaconda distribution.
Anaconda Enterprise users can also leverage Anaconda Repository to create and distribute their own custom Anaconda parcels for Cloudera Manager.
If you need more dynamic functionality than the Anaconda parcels offer, Anaconda Scale lets you dynamically install and manage multiple conda environments–such as Python 2, Python 3, and R environments–and packages across a cluster.
Using the Anaconda parcel¶
For more information about installing the Anaconda parcel on a CDH cluster using Cloudera Manager, see the Anaconda parcel documentation.
Transitioning to the dynamic, managed version of Anaconda Scale¶
To transition from the Anaconda parcel for CDH to the dynamic, managed version of Anaconda Scale, follow the instructions below to uninstall the Anaconda parcel on a CDH cluster and then transition to a centrally managed version of Anaconda.
Uninstalling the Anaconda parcel¶
If the Anaconda parcel is installed on the CDH cluster, uninstall the parcel:
- From the Cloudera Manager Admin Console, in the top navigation bar, click the Parcels indicator.
- To the right of the Anaconda parcel listing, click the Deactivate button.
- When prompted, click OK to deactivate the Anaconda parcel and restart Spark and related services.
- Click the arrow to the right of the Anaconda parcel listing and select Remove From Hosts.
- In the confirmation dialog box, confirm removal of the Anaconda parcel from the cluster nodes.
For more information about managing Cloudera parcels, see the Cloudera documentation.
Transitioning to a centrally managed Anaconda installation¶
Once you’ve uninstalled the Anaconda parcel, see the Anaconda Scale installation instructions for more information about installing a centrally managed version of Anaconda. | https://docs.anaconda.com/anaconda-scale/cloudera-cdh/ | 2019-02-16T06:24:48 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.anaconda.com |
Contents
Place and Media
Purpose: To provide information about customizable commands which modify the media of a given place.
Learn about Place and Media CommandsThe commands presented in this page enable your application to manage the agent activity (login, ready, not ready, log off) on the media of a given place. Commands for the media of type Open Media apply to Chat, E-Mail, and Work Item media.
ImportantRead Use Customizable Commands to see code snippets which demonstrate the usage of the commands.
Manage Open Media
The following commands let you manage media of type open media, including work items, chat and e-mail (which inherit from open media):
- Change the media status (ready, not ready, login, log off);
- Activate or deactivate Do Not Disturb (DND) features;
- Modify the state reason for a given media.
ImportantThese commands do not apply to DNs (voice media).
Manage SMS Media
You can also create new outbound sms through the following commands:
Manage E-Mail Media
In addition to the Open Media commands, you can also create new outbound e-mails through the following commands:
Manage Voice Media
The following commands apply to the voice media only. The voice media is composed of Directory Numbers (DNs) available on the underlying switches. Through the below commands, you can:
- Change the media status (ready, not ready, login, log off);
- Activate or deactivate the Do Not Disturb (DND) features;
- Start a new call;
- Manage a new Instant Messaging session.
This page was last modified on January 16, 2017, at 06:23.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/IW/8.5.1/Developer/PlaceandMedia | 2019-02-16T06:20:05 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.genesys.com |
.
Prerequisites
For information about the levels at which you can perform this procedure, and the modules, handlers, and permissions that are required to perform this procedure, see Server Certificates Feature Requirements (IIS 7).
Exceptions to Feature Requirements
- None
To create a self-signed certificate
You can perform this procedure by using the user interface (UI). Self-Signed Certificate.
On the Create Self-Signed Certificate page, type a friendly name for the certificate in the Specify a friendly name for the certificate box, and then click OK.
Command Line
None
Configuration
None
WMI Server Certificates in IIS 7 | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753127(v=ws.10) | 2019-02-16T05:56:08 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.microsoft.com |
Guides
- Fastly Status
Setting up image optimization
Last updated June 29, 2018
WARNING: Only send image content through the Fastly Image Optimizer. Non-image content can't be optimized by the Image Optimizer but will still be counted and charged as an Image Optimizer request, which may cost you more.
To use the Fastly Image Optimizer, start by contacting sales to request access. Be sure to include the Service ID of the service for which image optimization should be enabled. Then, set up image optimization by following the steps below.
Add the Fastly Image Optimizer header
Once image optimization has been activated on your service ID and confirmed via email, configure your service by adding the Fastly Image Optimizer header page appears.
- Fill out the Create a header window as follows:
- In the Name field, type
Fastly Image Optimizer.
- From the Type menu, select Request, and from the Action menu, select Set.
- In the Destination field, type
http.x-fastly-imageopto-api.
- In the Source field, type
"fastly". By default, the Fastly Image Optimizer removes any additional query string parameters that are not part of our image API. If your source image requires delivery of additional query string parameters from origin then type
"fastly; qp=*"instead.
- From the Ignore if set menu, select No.
- In the Priority field, type
1.
- Click Create to create the new header.
TIP:, type a descriptive name for the new condition (for example,
Fastly Image Optimizer Request Condition).
- In the Apply if field, type.
TIP: For more help using conditions, see our guide.
Enable shielding
To reduce cache miss latency and ensure long-lived connections, you must enable shielding for your origin. The shield location should be as geographically close to your image's origin as possible.
Our guide to enabling shielding provides more information on how to set this up. Take special note of the step immediately following your shielding location selection in that guide. If the Host header for the service has been changed from the default, you must ensure the new hostname is added to the list of domains.
Confirm everything is working
Once you've activated your changes, check to see if the Fastly Image Optimizer is processing your image request by typing the following command on the command line:
echo -n "Image Width: " ; curl -sI | grep "Fastly-Io-Info:" | cut -d' ' -f6 | cut -d= -f2 | cut -dx -f1
Replace with the full image URL and width of the image you're testing.
The command line output will display the image's width, which should match the width API parameter you added to your image. For example, the output might be:
Image Width: 200
Review and edit the default image settings
Fastly applies specific image optimization settings to all images by default.
Changing default image settings in the web interface
The Fastly web interface provides the easiest way to review the default optimization settings in a single location. You can use the web interface to make changes to these settings as well. Changes to other image settings, however, including most image transformations, require issuing API calls.
To review and edit the default image settings via the web interface, follow the steps below:
- Log in to the Fastly web interface and click the Configure link.
- From the service menu, select the appropriate service.
- Click the Configuration button and then select Clone active. The Domains page appears.
Click the type type.
Changing image settings other than the defaults via API calls
The Fastly web interface only allows you to change the most basic settings of image optimization and transformation. For more complex changes to settings beyond these defaults, you must change your existing image URLs by adding Fastly API query string parameters to them. For example, if your image source existed at, you would need to add
?<PARAMETER=VALUE> to create the proper query string structure for Fastly to transform the image in some way.
You can change existing URLs in the source by adding one or more Fastly URL API query string parameters directly to your site’s HTML. You can also change them programmatically. For more information about how to do this, see our guides and API documentation as follows:
- Our image optimization examples demonstrate some of the most common image transformations you can add to your URLs. These examples perform transformations and optimizations on our so you can see exactly how they work before you change your image URLs.
- Our guide to serving images provides additional details you should know before you start adding Fastly image transformation URL API query string parameters to your existing image URLs. It specifically discusses the transformation order of parameters when you specify more than one parameter at a time (e.g.,
?<PARAMETER1=VALUE&PARAMETER2=VALUE>).
- Our Fastly Image Optimizer API describes each of the available image transformations in detail and includes the exact API pattern you can add to URLs, along with a description and example of how to use each parameter and its values. | https://docs.fastly.com/guides/imageopto-setup-use/setting-up-image-optimization | 2019-02-16T06:16:41 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.fastly.com |
Copying Decision Tables
You can copy a decision table and paste that copy in the same rule package, either on the same or a different node. Follow these steps to copy a decision table:
- Navigate to the rule package to which the decision table decision table in the list,.ImportantIf you wish to move the rule to another location, first copy, then paste, then go back and delete the original. The system will not allow you to paste a rule after it has been deleted from the repository.
- Update the information as needed and click Save. Refer to Creating Decision Tables for details about the fields that can be updated modified on September 25, 2014, at 00:12.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GRS/8.5.0/GRATHelp/CopyingDT | 2019-02-16T05:37:22 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.genesys.com |
Code¶
Plugins¶
Plugins are supported in several places: Event Processing and the REST api.
Event Processing¶
The front-end event processing portion of MozDef supports python plugins to allow customization of the input chain. Plugins are simple python modules than can register for events with a priority, so they only see events with certain dictionary items/values and will get them in a predefined order.
To create a plugin, make a python class that presents a registration dictionary and a priority as follows:
class message(object): def __init__(self): '''register our criteria for being passed a message as a list of lower case strings or values to match with an event's dictionary of keys or values set the priority if you have a preference for order of plugins to run. 0 goes first, 100 is assumed/default if not sent ''' self.registration = ['sourceipaddress', 'destinationipaddress'] self.priority = 20
Message Processing¶
To process a message, define an onMessage function within your class as follows:
def onMessage(self, message, metadata): #do something interesting with the message or metadata return (message, metadata)
The plugin will receive a copy of the incoming event as a python dictionary in the ‘message’ variable. The plugin can do whatever it wants with this dictionary and return it to MozDef. Plugins will be called in priority order 0 to 100 if the incoming event matches their registration criteria. i.e. If you register for sourceipaddress you will only get events containing the sourceipaddress field.
If you return the message as None (i.e. message=None) the message will be dropped and not be processed any further. If you modify the metadata the new values will be used when the message is posted to elastic search. You can use this to assign custom document types, set static document _id values, etc.
REST Plugins¶
The REST API for MozDef also supports python plugins which allow you to customize your handling of API calls to suit your environment. Plugins are simple python modules than can register for REST endpoints with a priority, so they only see calls for that endpoint and will get them in a predefined order.
To create a REST API plugin simply create a python class that presents a registration dictionary and priority as follows:
class message(object): def __init__(self): '''register our criteria for being passed a message as a list of lower case strings to match with an rest endpoint (i.e. blockip matches /blockip) set the priority if you have a preference for order of plugins 0 goes first, 100 is assumed/default if not sent Plugins will register in Meteor with attributes: name: (as below) description: (as below) priority: (as below) file: "plugins.filename" where filename.py is the plugin code. Plugin gets sent main rest options as: self.restoptions self.restoptions['configfile'] will be the .conf file used by the restapi's index.py file. ''' self.registration = ['blockip'] self.priority = 10 self.name = "Banhammer" self.description = "BGP Blackhole"
The registration is the REST endpoint for which your plugin will receive a copy of the request/response objects to use or modify. The priority allows you to order your plugins if needed so that they operate on data in a defined pattern. The name and description are passed to the Meteor UI for use in dialog boxes, etc so the user can make choices when needed to include/exclude plugins. For example the /blockip endpoint allows you to register multiple methods of blocking an IP to match your environment: firewalls, BGP tables, DNS blackholes can all be independently implemented and chosen by the user at run time.
Message Processing¶
To process a message, define an onMessage function within your class as follows:
def onMessage(self, request, response): ''' request: response: ''' response.headers['X-PLUGIN'] = self.description
It’s a good idea to add your plugin to the response headers if it acts on a message to facilitate troubleshooting. Other than that, you are free to perform whatever processing you need within the plugin being sure to return the request, response object once done:
return (request, response) | https://mozdef.readthedocs.io/en/stable/code.html | 2019-02-16T04:52:02 | CC-MAIN-2019-09 | 1550247479885.8 | [] | mozdef.readthedocs.io |
Monitoring your active deployments is key for delivering business value and staying ahead of potential problems. Skafos provides 3 primary tools for you to diagnose, investigate, and analyze the ins and outs of your Deployments:
- Logs - live and historical, queryable, analyzable
- Metrics - live and historical, system performance, model performance, training metrics
- Alerts - user-defined notifications sent via email or viewable on the Skafos Dashboard
Your Deployment generates logs for each Job. This includes both system logs, and logs defined by you with a simple Skafos SDK call from within your Job. If you want to see what’s happening in real-time, or if you’re interested in examining historical logs, use the CLI or Dashboard to investigate. On the Dashboard, we give you the ability to search for logs using specific keywords or phrases.
Skafos also tracks various system and job-specific metrics for your active Deployments:
- System Metrics
-- Resource Utilization - CPUs, Memory
-- Deployment Status
User Defined Metrics
To view custom metrics that have been reported for a specific job, navigate to the job and you'll see metric names for each type of metric that has been reported. In the case below the job has only one type of metric: "Model Loss".
Charts are initially shown in a closed state, so to see the chart populated with data, click "Show". If the job is running, the chart will periodically check & update the chart when there is new data.
If a Job fails, or something goes unexpectedly wrong, you need to be the first to know. Using the Skafos SDK, you can log special alerts that fire a notification in the Skafos Dashboard and to your email inbox. | https://docs.metismachine.io/docs/dashboard | 2019-02-16T04:51:58 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.metismachine.io |
UIElement.
Update
UIElement. Layout Update
UIElement. Layout Update
UIElement. Layout Update
Method
Layout
Definition
Ensures that all visual child elements of this element are properly updated for layout.
public: void UpdateLayout();
public void UpdateLayout ();
member this.UpdateLayout : unit -> unit
Public Sub UpdateLayout ()
Remarks.
Applies to
See also
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.updatelayout?view=netframework-4.7.2 | 2019-02-16T05:37:03 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.microsoft.com |
Starting a Task at Container Instance Launch Time
Depending on your application architecture design, you may need to run a specific container on every container instance to deal with operations or security concerns such as monitoring, security, metrics, service discovery, or logging.
To do this, you can configure your container instances to call the docker run command with the user data script at launch, or in some init system such as Upstart or systemd. While this method works, it has some disadvantages because Amazon ECS has no knowledge of the container and cannot monitor the CPU, memory, ports, or any other resources used. To ensure that Amazon ECS can properly account for all task resources, create a task definition for the container to run on your container instances. Then, use Amazon ECS to place the task at launch time with Amazon EC2 user data.
The Amazon EC2 user data script in the following procedure uses the Amazon ECS introspection API to identify the container instance. Then, it uses the AWS CLI and the start-task command to run a specified task on itself during startup.
To start a task at container instance launch time
If you have not done so already, create a task definition with the container you want to run on your container instance at launch by following the procedures in Creating a Task Definition.
Modify your
ecsInstanceRoleIAM role to add permissions for the
StartTaskAPI operation. For more information, see Amazon ECS Container Instance IAM Role.
Open the IAM console at.
In the navigation pane, choose Roles.
Choose the
ecsInstanceRole. If the role does not exist, use the procedure in Amazon ECS Container Instance IAM Role to create the role and return to this procedure. If the role does exist, select the role to view the attached policies.
In the Permissions tab, choose Add inline policy.
For Service, choose Choose a service, EC2 Container Service.
For Actions, type StartTask in the search field, and then select StartTask.
For Resources, select All resources, and then choose Review policy.
On the Review policy page, enter a name for your policy, such as
ecs-start-taskand choose Create policy.
Launch one or more container instances by following the procedure in Launching an Amazon ECS Container Instance, but in Step 7. Then, copy and paste the MIME multi-part user data script below into the User data field. Substitute
your_cluster_namewith the cluster for the container instance to register into and
my_task_defwith the task definition to run on the instance at launch.
Note
The MIME multi-part content below uses a shell script to set configuration values and install packages. It also uses an Upstart job to start the task after the ecs service is running and the introspection API is available.
Content-Type: multipart/mixed; boundary="==BOUNDARY==" MIME-Version: 1.0 --==BOUNDARY== Content-Type: text/x-shellscript; charset="us-ascii" #!/bin/bash # Specify the cluster that the container instance should register into cluster=
your_cluster_name# Write the cluster configuration variable to the ecs.config file # (add any other configuration variables here also) echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config # Install the AWS CLI and the jq JSON parser yum install -y aws-cli jq --==BOUNDARY== Content-Type: text/upstart-job; charset="us-ascii" #upstart-job description "Amazon EC2 Container Service (start task on instance boot)" author "Amazon Web Services" start on started ecs script exec 2>>/var/log/ecs/ecs-start-task.log set -x until curl -s do sleep 1 done # Grab the container instance ARN and AWS region from instance metadata instance_arn=$(curl -s | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' ) cluster=$(curl -s | jq -r '. | .Cluster' | awk -F/ '{print $NF}' ) region=$(curl -s | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}') # Specify the task definition to run at launch task_definition=
my_task_def# Run the AWS CLI start-task command to start your task on this container instance aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region end script --==BOUNDARY==--
Verify that your container instances launch into the correct cluster and that your tasks have started.
Open the Amazon ECS console at.
From the navigation bar, choose the region that your cluster is in.
In the navigation pane, choose Clusters and select the cluster that hosts your container instances.
On the Cluster page, choose Tasks.
Each container instance you launched should have your task running on it, and the container instance ARN should be in the Started By column.
If you do not see your tasks, you can log in to your container instances with SSH and check the
/var/log/ecs/ecs-start-task.logfile for debugging information. | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/start_task_at_launch.html | 2019-02-16T05:36:53 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.aws.amazon.com |
Understand development and production modes
Some applications can be configured in either “development” or “production” mode:
Production mode: File permissions and configuration settings are set with security and performance in mind. Installing certain plugins, themes and updates may require manual changes or installation of additional services like FTP.
TIP: This mode is recommended if the stack will be deployed on a public server.
If you install a module or stack and select production mode, these applications will request an FTP account in order to download their extensions. If you already have an FTP server on your machine, use this mode.
Development mode: File permissions and configuration settings are not optimal from a security standpoint but make it easy to install plugins, themes and updates for certain applications.
TIP: This mode is recommended for development purposes or for use on a private company network or intranet. | https://docs.bitnami.com/virtual-machine/infrastructure/lamp/get-started/dev-prod-modes/ | 2019-02-16T06:26:32 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.bitnami.com |
diff
Name
cb-cli diff - Compares the local files with the versions stored on the ClearBlade platform System
Synopsis
cb-cli diff [-all-services] [-all-libraries] [-service = <SERVICE_NAME>] [-userschema] [-collection = <COLLECTION_NAME>] [-user = <EMAIL>] [-role = <ROLE_NAME>] [-trigger = <TRIGGER_NAME>] [-timer = <TIMER_NAME>]
Description
This command allows you to do a “diff” between an object in your current repo and the corresponding object residing in the associated remote ClearBlade system. This involves diffing the meta data for the object, and if the object is a code service or library, also performing a traditional diff on the code. For example, consider a code service. If you (locally) changed the actual code for the service, and also (locally) changed the library dependencies for the service, the diff command will report both changes.
Options
The following options are available
all-services
Diffs all the services stored in the repo
all-libraries
Diffs all of the libraries stored in the repo
service = < service_name >
Diffs the local and remote versions of
library=< library_name >
Diffs the local and remote versions of
userschema
Diffs the local and remote versions of the users table schema
collection = < collection_name >
Diffs the local and remote versions of the collections meta-data. Does not diff the items of the collection.
user = < email >
Diffs the local and remote versions of the user record. Also diffs the users roles
role = < role_name >
Diffs all the capability details of the specific role
trigger = < trigger_name >
Diffs triggers
timer = < timer_name >
Diffs timers
Example
cb-cli diff -collection=fgbfgb
Output:
< host:"smtp.gmail.com", --- > host:"mtp.gmail.com",
cb-cli diff -collection=samplecollection
_ | https://docs.clearblade.com/v/3/4-developer_reference/cli/9_Diff/ | 2019-02-16T05:27:12 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.clearblade.com |
Figure 16.217. The “GIMP Online” submenu of the Help menu
The GIMP online command displays a submenu
which lists several helpful web sites that have to do with various
aspects of GIMP. You can click on one
of the menu items and your web browser will try to connect to the URL. | https://docs.gimp.org/2.8/en/gimp-help-online.html | 2019-02-16T05:21:22 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.gimp.org |
Applies to Dynamics 365 for Customer Engagement apps version 9.x
Starting with the Dynamics 365 for Customer Engagement apps version 9.0, virtual entities enable the integration of data residing in external systems by seamlessly representing that data as entities in Dynamics 365 for Customer Engagement apps,.
This section discusses the implications of virtual entities for developers. For more information about managing virtual entities from the user interface, see Create and edit virtual entities.
Virtual entities, data providers and data sources
A virtual entity is a definition of an entity in the Dynamics 365 for Customer Engagement apps platform metadata without the associated physical tables for entity instances created in the Dynamics 365 for Customer Engagement apps database. Instead during runtime, when an entity instance is required, its state is dynamically retrieved from the associated external system. Each virtual entity type is associated with a virtual entity data provider and (optionally) some configuration information from an associated virtual entity data source.
A data provider is a particular type of Dynamics 365 plugin, which is registered against CRUD events that occur in the platform. This initial release only supports READ operations.
The following data providers ship with Dynamics 365 for Customer Engagement apps version 9.0:
- Dynamics 365 for Customer Engagement apps Dynamics 365 for Customer Engagement apps entity. This means:
- All entities in the external data source must have an associated GUID primary key.
- All entity properties must be represented as Dynamics 365 for Customer Engagement apps attributes. You can use simple types representing text, numbers, optionsets, dates, images, and lookups.
- You must be able to model any entity relationships in Dynamics 365 for Customer Engagement apps.
- An attribute on a virtual entity cannot be calculated or rollup. Any desired calculations must be done on the external side, possibly within or directed by the data provider. Dynamics 365 for Customer Engagement apps API, see API considerations of virtual entities.
Comentários
Adoraríamos saber sua opinião. Escolha o tipo que gostaria de fornecer:
Nosso sistema de comentários é criado no GitHub Issues. Saiba mais em nosso blog.
Carregando comentários... | https://docs.microsoft.com/pt-br/dynamics365/customer-engagement/developer/virtual-entities/get-started-ve | 2019-02-16T05:49:07 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.microsoft.com |
Evaluating Resources with Rules
Use AWS Config to evaluate the configuration settings of your AWS resources. You do this by creating AWS Config rules, which represent your ideal configuration settings. AWS Config provides customizable, predefined rules called managed rules to help you get started. You can also create your own custom rules. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant.
For example, when an EC2 volume is created, AWS Config can evaluate the volume against a rule that requires volumes to be encrypted. If the volume is not encrypted, AWS Config flags the volume and the rule as noncompliant. AWS Config can also check all of your resources for account-wide requirements. For example, AWS Config can check whether the number of EC2 volumes in an account stays within a desired total, or whether an account uses AWS CloudTrail for logging..
By using AWS Config to evaluate your resource configurations, you can assess how well your resource configurations comply with internal practices, industry guidelines, and regulations.
For regions that support AWS Config rules, see AWS Config Regions and Endpoints in the Amazon Web Services General Reference.
You can create up to 150 AWS Config rules per region in your account. For more information, see AWS Config Limits in the Amazon Web Services General Reference.
You can also create custom rules to evaluate additional resources that AWS Config doesn't yet record. For more information, see Evaluating Additional Resource Types. | https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html | 2019-02-16T05:38:46 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.aws.amazon.com |