Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
If you wish to use the Laboratory features, create Users with the Laboratory User role. Lab Tests, Sample Collection, etc. are only visible to users with this role enabled. Read Healthcare Settings for setting up the Healthcare module. Whenever you create a new Lab Test, the Lab Test document is loaded based on the template configured for that particular test, so you will need a separate template configured for each Lab Test. Here's how you can configure the various types of templates. Healthcare > Setup > Lab Test Template > New Lab Test Template. After providing the Name for the test, select a Code and Item Group for creating the mapped Item. ERPNext Healthcare maps every Lab Test (and every other billable healthcare service) to an Item with "Maintain Stock" set to false. This way, the Accounts module can invoice the Item and you can see the sales-related reports in the Selling module. You can also set the selling rate of the Lab Test here; this will update the Selling Price List. Note that the Standard Selling Rate field behaves like the Item's Standard Selling Rate: updating it will not update the Selling Price List. The Is Billable flag in the Lab Test Template controls whether the mapped Item is enabled: unchecking the flag disables the Item, and checking it again enables it. ERPNext Healthcare offers several result formats, including Single, Compound, and Grouped. For the Single and Compound result formats, you can set the normal values. You will have to select the Sample required for the test, and you can also mention the quantity of sample that needs to be collected. These details will be used when creating the Sample Collection document for the Lab Test. To organize your clinic into departments, you can create multiple Medical Departments. You can select the appropriate department in the Lab Test Template, and it will be included in the Lab Test result print. Healthcare > Setup > Medical Department > New Medical Department. You can create masters for the Samples that are to be collected for a Lab Test: Healthcare > Setup > Lab Test Sample > New Lab Test Sample. You can create masters for the Units of Measure to be used in Lab Test documents: Healthcare > Setup > Lab Test UOM > New Lab Test UOM. You can create masters for a list of Antibiotics: Healthcare > Setup > Antibiotic > New Antibiotic. You can create masters for a list of Sensitivities to various Antibiotics: Healthcare > Setup > Sensitivity > New Sensitivity.
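Because every Lab Test needs its own template, sites migrating an existing test catalogue often script template creation instead of clicking through the form for each test. Below is a minimal sketch using the Frappe Python API (frappe.get_doc / insert). The field names used here (lab_test_name, lab_test_code, lab_test_group, is_billable, lab_test_rate) are assumptions based on the form labels described above; verify them against your ERPNext version before running.

import frappe

def create_lab_test_template(name, code, item_group, rate):
    # Field names mirror the form labels above and are assumptions --
    # check them against your ERPNext schema before use.
    doc = frappe.get_doc({
        "doctype": "Lab Test Template",
        "lab_test_name": name,        # assumed fieldname for "Name"
        "lab_test_code": code,        # assumed fieldname for "Code"
        "lab_test_group": item_group, # assumed fieldname for "Item Group"
        "is_billable": 1,             # creates the mapped, billable Item
        "lab_test_rate": rate,        # selling rate for the Selling Price List
    })
    doc.insert()
    return doc

# Example (run from a bench console on your ERPNext site):
# create_lab_test_template("Lipid Profile", "LIPID", "Laboratory", 450)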
https://docs.erpnext.com/docs/user/manual/en/healthcare/setup_laboratory
2020-03-28T18:39:21
CC-MAIN-2020-16
1585370492125.18
[]
docs.erpnext.com
This document explains how you enter running absence. You do this when the end of the absence is unknown, for example, when an employee goes on indefinite sick leave. A personal and indefinite leave of absence, containing the absence type and the date from which it is valid, is entered in 'Absence. Enter Running' (TMS004). Absences can be reviewed in 'Absence. Clock In/Out' (TMS200). The file containing information about absences (MTMABS) is updated. Once running absence is entered, no manual absence entries must be made. Instead, absence will be recorded until the next time the employee clocks in. The starting conditions listed in Reporting in Time and Attendance must be met. Start 'Absence. Enter Running' (TMS004). On the E panel, enter your/the employee's card number, the type of absence, and the date from which you/the employee will be absent. Press Enter. The absence will be discontinued the next time you/the employee clocks in.
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.manexechs_16.x/c000475.html
2020-03-28T18:27:44
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
Bokeh Image Node The Bokeh Image node generates a special input image for use with the Bokeh Blur filter node. It is designed to create a reference image that simulates optical parameters such as aperture shape and lens distortions, which have an important impact on bokeh in real cameras. Properties The first three settings simulate the aperture of the camera. - Flaps Sets an integer number of blades for the camera's iris diaphragm. - Angle Gives these blades an angular offset relative to the image plane. - Rounding Sets the curvature of the blades, from 0 (straight) to 1 (a perfect circle). - Catadioptric Provides a type of distortion found in mirror lenses and some telescopes. This can be useful to produce a visually complex bokeh. - Lens Shift Introduces chromatic aberration into the blur, such as would be caused by a tilt-shift lens. Example In the example below, the Bokeh Image node is used to define the shape of the bokeh for the Bokeh Blur node.
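For scripted setups, the same node can be created through Blender's Python API (bpy). This is a minimal sketch; the property names follow the UI labels above and are believed to map to flaps, angle, rounding, catadioptric, and shift on the CompositorNodeBokehImage type, but check them against your Blender version.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Create the Bokeh Image node and set the aperture-style properties.
bokeh_img = tree.nodes.new(type='CompositorNodeBokehImage')
bokeh_img.flaps = 6          # number of iris blades
bokeh_img.angle = 0.0        # angular offset of the blades (radians)
bokeh_img.rounding = 0.2     # 0 = straight blades, 1 = perfect circle
bokeh_img.catadioptric = 0.0 # mirror-lens style distortion
bokeh_img.shift = 0.0        # lens shift / chromatic aberration

# Feed it into a Bokeh Blur node as the reference bokeh shape.
bokeh_blur = tree.nodes.new(type='CompositorNodeBokehBlur')
tree.links.new(bokeh_img.outputs['Image'], bokeh_blur.inputs['Bokeh'])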
https://docs.blender.org/manual/es/dev/compositing/types/input/bokeh_image.html
2020-03-28T19:01:48
CC-MAIN-2020-16
1585370492125.18
[]
docs.blender.org
A capacity calendar contains information on available daily capacity for a work center. Capacity is calculated from information regarding capacity in the work center file in combination with the capacity percentage (working time as a proportion of the day, 0 - 100 %) as stated in the system calendar. Daily capacity can also be maintained manually. The capacity calendar is used to calculate throughput time. This provides the start time and completion time for a manufacturing order. Available capacity as set in the capacity calendar is displayed in the capacity requirements plan. Capacity calendars are also used in material requirements planning when scheduling planned orders.
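As a rough illustration of the calculation described above, the available capacity for a day can be thought of as the work center's nominal capacity scaled by the calendar's capacity percentage. The sketch below is illustrative only and is not M3 code; the real calculation also draws on other work center data.

def daily_capacity(nominal_hours, capacity_percentage):
    # capacity_percentage is working time as a proportion of the day (0-100 %).
    # Illustrative only -- not the actual M3 calculation.
    return nominal_hours * capacity_percentage / 100.0

# Example: a work center with 16 nominal hours on a day at 50 % capacity
print(daily_capacity(16, 50))  # 8.0 hours available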
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.manexechs_16.x/czz0188.html
2020-03-28T18:04:52
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
A set of collapsed nodes is not shared, even within a single Level Blueprint or Blueprint Class. If you copy the collapsed node, it duplicates the internal graph. This can be handy if you want to make several variants of approximately the same behavior, but it also means that any bug fixes would have to be applied to each copy. The feature is intended more to tidy up a graph by hiding complexity inside, rather than as any sort of sharing or reuse. To expand a collapsed graph: right-click on a collapsed graph node and choose Expand Node. The collapsed graph node is replaced by the nodes it contained and is no longer present in the My Blueprint tab graph hierarchy.
https://docs.unrealengine.com/en-US/Engine/Blueprints/UserGuide/Nodes/index.html
2020-03-28T18:45:47
CC-MAIN-2020-16
1585370492125.18
[array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/SelectNode.jpg', 'SelectNode.jpg'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/VarMessage.jpg', 'VarMessage.jpg'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/DotBoxSpawnEmitter.jpg', 'DotBoxSpawnEmitter.jpg'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_move.jpg', 'Blueprint Moving Nodes'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_pins.jpg', 'Blueprint Input and Output Pins'], dtype=object) array(['./../../../../../Images/Shared/Glossary/E/k2_pins_exec.jpg', 'Blueprint Exec Pins'], dtype=object) array(['./../../../../../Images/Shared/Glossary/D/k2_pins_data_types.jpg', 'Blueprint Data Pin Types'], dtype=object) array(['./../../../../../Images/Shared/Glossary/D/k2_pins_data.jpg', 'Blueprint Data Pins'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_autocast_message.jpg', 'Blueprint - Compatible Types Message'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_autocast_node.jpg', 'Blueprint - Autocast Node'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Variables/HT38.jpg', 'HT38.png'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Variables/HT40.jpg', 'HT40.png'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Variables/HT39.jpg', 'HT39.png'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/SelectNode.jpg', 'SelectNode.jpg'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_flow_exec.jpg', 'k2_flow_exec.jpg'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_flow_data.jpg', 'Blueprint Data Wire'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_tunnel_entrance.jpg', 'Blueprint Tunnel Entrance Node'], dtype=object) array(['./../../../../../Images/Engine/Blueprints/UserGuide/Nodes/k2_tunnel_exit.jpg', 'Blueprint Tunnel Exit Node'], dtype=object) ]
docs.unrealengine.com
socket_stats_* The socket_stats_* tables store statistical metrics about socket usage for a Greenplum Database instance. There are three system tables, all having the same columns. These tables are in place for future use and are not currently populated. socket_stats_now is an external table whose data files are stored in $MASTER_DATA_DIRECTORY/gpperfmon/data. socket_stats_tail is an external table whose data files are stored in $MASTER_DATA_DIRECTORY/gpperfmon/data. This is a transitional table for socket statistical metrics that have been cleared from socket_stats_now but have not yet been committed to socket_stats_history. It typically contains only a few minutes' worth of data. socket_stats_history is a regular table that stores historical socket statistical metrics. It is pre-partitioned into monthly partitions. Partitions are automatically added in two-month increments as needed. Administrators must drop old partitions for the months that are no longer needed.
https://gpcc.docs.pivotal.io/320/gpcc/topics/db-socket-stats.html
2020-03-28T18:54:34
CC-MAIN-2020-16
1585370492125.18
[]
gpcc.docs.pivotal.io
What is a template? The Template controls the overall look and layout of your site and how your site appears to visitors. You can use templates created by others or create your own according to Joomla! standards. Some are available without charge under various licenses, and some are for sale. In addition, there are many designers available who can make custom templates. You can also make your own template. Templates are managed with the Template Manager, which you will also use for switching templates. To use the Template Manager, log in to the Back-end (Administrator) of your site.
https://docs.joomla.org/What_is_a_template%3F
2017-10-17T02:12:34
CC-MAIN-2017-43
1508187820556.7
[]
docs.joomla.org
This category is for "official" specifications for Joomla. Unapproved specifications or proposals should be submitted to the dev mailing list or the Feature Requests - White Paper forum. This category has the following 6 subcategories, out of 6 total. The following 27 pages are in this category, out of 27 total.
https://docs.joomla.org/index.php?title=Category:Specifications&oldid=63221
2015-02-27T06:39:05
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
class Greeter { def name Greeter(who) { name = who[0].toUpperCase() + who[1..-1] } def salute() { println "Hello ${name}!" } } g = new Greeter('world') // create object g.salute() // Output "Hello World!" Leveraging existing Java libraries: import org.apache.commons.lang.WordUtils class Greeter { def name Greeter(who) { name = WordUtils.capitalize(who) } def salute() { println "Hello ${name}!" } } g = new Greeter('world') // create object g.salute() // Output "Hello World!" On the command line: groovy -e "println 'Hello ' + args[0]" World Enjoy making your code groovier!
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=65589
2015-02-27T06:17:18
CC-MAIN-2015-11
1424936460576.24
[]
docs.codehaus.org
Introduction: Sencha Animator Guided Tour. To make effective use of Sencha Animator, it's important to learn the major parts of the interface and feature groupings. This guided tour of Animator provides a brief introduction to some Animator functionality and suggests exercises to help you learn your way around the tool. More detail about using each part of Animator is provided in the Animator Reference. Start with our guide overview and then look for the section of the reference that corresponds to the part of Animator you want to use. Exercise: Download and launch Animator - Download the free trial version of Animator and install it. - Launch Animator to open the main application window and follow along with the tour by following the directions after each Exercise title. Tools Panel Select from different types of objects to add to a project using the Tools Panel, or select the pointer to edit elements already in the project. Choose from rectangles, rounded rectangles, text, images, video, and iframe embeds. Exercise: Add rectangles to a project - Click the Rectangle icon in the Tools Panel. - Position the cursor anywhere over the Stage and click it once. A square appears where you positioned the cursor, and the element is listed in the Object Tree with the name New Rectangle. - Double-click the text New Rectangle and type in any name you want to rename the rectangle. - Select the Rectangle icon again and position the cursor over a different spot in the Stage. - Click and drag to both create and size a rectangle. Name the second rectangle. Make sure you are selecting the base state of the object (the selection color should be purple); if not, double-click the object on the Stage to select its base state. Timeline and Object Tree Use the Timeline to set keyframes within a scene's timelines. Notice that the selection color is blue when a keyframe is selected. - The controls at the top of the Timeline enable viewing of a scene using the arrow button, or pausing, fast-forwarding, or rewinding the scene. - Dragging the slider next to the magnifying glass at the upper right of the Timeline zooms the Timeline in or out, enabling you to set keyframes at very small or large intervals. - Clicking the small box underneath the lock next to an object name locks that object so that it cannot be edited on the stage. Selections Understanding different selection modes is critical to working well with Animator. There are two main selection modes: - Base state (purple) - This edits the base of the object, i.e. a state that is persistent across all keyframes. - Keyframe state (blue) - With a keyframe selected - This edits the object at a certain point in time. Properties set on a keyframe will typically override or be added on top of properties defined in the base state. - With an object selected - This shows you the object at a point in time, but not on top of a keyframe. If recording mode is on, editing the object will create a keyframe at that time. This can also be called "the potential keyframe state". Selections can be toggled between the different types by using the navigation in the top right of the Properties panel when an object or keyframe is selected. For more info, check Object Selection States. Menu Bar (not shown) Use the Animator menu bar to open, save, preview, and export projects. It also contains high-level control over project elements, including duplicating scenes and timelines; duplicating, removing, and positioning objects within a project; and deleting keyframes and setting keyframe time.
Exercise: Save a project - In the Menu bar, select Edit, then Save Project. - Give the project a name and save it to a location that will be easy to remember. Clock Use the clock to step through animations or to set a precise time. Record button Turns Recording mode off and on. The default position is on. With Recording mode on, move objects on the Stage to create new keyframes and determine positions for animations. Stage Control the position, size, and rotation of objects on the Stage. Scenes Panel A project can contain multiple scenes and timelines, and you navigate between them using the Scenes Panel. New scenes are added by clicking the "New Scene" button in the lower center of the Scenes Panel. Scenes and timelines are duplicated using commands from the Menu Bar. Delete scenes and timelines by clicking the "×" next to them. Reorder scenes and timelines by dragging them forward or backward in the panel. Navigate by clicking the scene or timeline in the Scenes Panel. The contents of the scene and timeline will appear in the Stage so you can edit them. Exercise: Add a scene and rename it - Click the "New Scene" button in the lower center of the Scenes Panel. - Rename it by double-clicking the text Scene 2 that appears below the new scene in the panel and typing in a new name. Exercise: Add a timeline and rename it - Click the "New Timeline" button in the lower center of a Scene. - Rename it by double-clicking the text Timeline and typing in a new name. For complete instructions on how to use the Properties Panel, use the Properties Index as a reference. Other panels let you manage project assets and the library of symbols. Project Panel In the Project Panel you can set project properties. Export Panel In the Export Panel you can set export properties.
http://docs.sencha.com/animator/1.5/?_escaped_fragment_=/guide/intro_tour
2015-02-27T05:59:36
CC-MAIN-2015-11
1424936460576.24
[array(['guides/intro_tour/animator_tour_1_5.png', 'Animator Interface animatortour'], dtype=object) array(['guides/intro_tour/timelinePanel.png', 'Timeline Panel timeline'], dtype=object) array(['guides/intro_tour/scenepanel.png', 'Scene Panel scene'], dtype=object) array(['guides/intro_tour/properties.png', 'Properties Panel properties'], dtype=object) ]
docs.sencha.com
The object-oriented way of writing plugins involves writing a subclass of JPlugin, a base class that implements the basic properties of plugins. In your methods, the following properties are available: $this->params: the parameters set for this plugin by the administrator; $this->_name: the name of the plugin; $this->_type: the group (type) of the plugin.
https://docs.joomla.org/index.php?title=J2.5:Creating_a_Plugin_for_Joomla&diff=101461&oldid=101038
2015-02-27T06:54:04
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
Have you ever wanted to run multiple instances of Tomcat with the same install base, but are tired of copying and setting up the server.xml file? Attached are a couple of Groovy scripts that create new server instances, and a tool to quickly set up a new Groovlet webapp. new_tomcat_instance.groovy Assume you have installed Tomcat 6 or Tomcat 5 in the /opt/tomcat directory. Run it again to create another instance with mytomcat2, and it should be configured to port 8082, and so on. Each new server instance will contain a ROOT webapp that lists all other webapps for quick links. Also, the server instance is configured with the Tomcat manager webapp enabled with an admin user. If you are at this stage, you ought to know where to look for your password. To remove a previously installed instance new_webapp.groovy This script will create a new webapp directory structure with all the Groovlet setup ready. Start your server and you have a webapp ready to go! The mysqlreport.groovy is an updated version of Andrew Glover's script. You will need to copy the MySQL JDBC driver jar into mywebapp/WEB-INF/lib for it to work.
http://docs.codehaus.org/pages/viewpage.action?pageId=40697881
2015-02-27T05:58:18
CC-MAIN-2015-11
1424936460576.24
[array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/emoticons/smile.png', '(smile)'], dtype=object) ]
docs.codehaus.org
Download/registration/purchase URL field: the URL can target different pages. If your extension is a free direct download link, then all your work is done and you can ignore the next step :). It is possible (but by no means compulsory) to point this to your login page. Note that an empty value of this parameter will default to pointing to the Joomla User Component login page.
https://docs.joomla.org/index.php?title=J3.3:Install_From_Web_For_Developers&diff=104825&oldid=104824
2015-02-27T06:35:13
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
Administration Guide: Sending software and BlackBerry Java Applications to BlackBerry devices - Managing BlackBerry Java Applications and BlackBerry Device Software - View the users that have a BlackBerry Java Application installed on their BlackBerry devices - View how the BlackBerry Administration Service resolved software configuration conflicts for a user account - Reconciliation rules for conflicting settings in software configurations
http://docs.blackberry.com/en/admin/deliverables/27983/Sending_sw_and_java_apps_to_BB_devices_320037_11.jsp
2015-02-27T06:21:31
CC-MAIN-2015-11
1424936460576.24
[]
docs.blackberry.com
scipy.signal.impulse2 - scipy.signal.impulse2(system, X0=None, T=None, N=None, **kwargs)[source] Impulse response of a single-input, continuous-time linear system. Example: compute the impulse response of a second-order system with a repeated root, x''(t) + 2 x'(t) + x(t) = u(t): >>> from scipy import signal >>> system = ([1.0], [1.0, 2.0, 1.0]) >>> t, y = signal.impulse2(system) >>> import matplotlib.pyplot as plt >>> plt.plot(t, y)
http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.signal.impulse2.html
2015-02-27T06:04:06
CC-MAIN-2015-11
1424936460576.24
[]
docs.scipy.org
JBoss.org Community Documentation, Version 5.4.0.Beta2. Cloud balancing: assign each process to a server. Hard constraints: every server should be able to handle the sum of the minimal hardware requirements (CPU, RAM, network bandwidth) of all its processes. Soft constraints: each server that has one or more processes assigned has a fixed maintenance cost; minimize the total cost. This is a form of bin packing. Curriculum course timetabling: schedule each lecture into a timeslot and into a room. The problem is defined by the International Timetabling Competition 2007, track 3. Nurse rostering: for each shift, assign a nurse to work that shift. The problem is defined by the International Nurse Rostering Competition 2010. The class that changes during solving is a planning entity. Most use cases have only 1 planning entity class. In real-time planning, problem facts can change during planning, because the problem itself changes; however, that doesn't make them planning entities. You can set a difficultyComparatorClass on the @PlanningEntity annotation: @PlanningEntity(difficultyComparatorClass = CloudProcessAssignmentDifficultyComparator.class) public class CloudProcessAssignment { // ... } public class CloudProcessAssignmentDifficultyComparator implements Comparator<CloudProcessAssignment> { public int compare(CloudProcessAssignment a, CloudProcessAssignment b) { return new CompareToBuilder() .append(a.getCloudProcess().getRequiredMultiplicand(), b.getCloudProcess().getRequiredMultiplicand()) .append(a.getCloudProcess().getId(), b.getCloudProcess().getId()) .toComparison(); } } A planning variable is a property of a planning entity that changes during planning. For example, a Queen's row property is a planning variable. Note that even though a Queen's row property changes to another Row during planning, no Row instance itself is changed. A planning variable points to a planning value. A planning variable getter needs to be annotated with the @PlanningVariable annotation. Furthermore, it needs a @ValueRange* annotation too: @PlanningEntity public class Queen { private Row row; // ... @PlanningVariable @ValueRangeFrom... } A planning value range is the set of possible planning values for a planning variable. This set can be discrete (for example row 1, 2, 3 or 4) or continuous (for example any double between 0.0 and 1.0). There are several ways to define the value range of a planning variable. A Queen whose planning variable is still uninitialized is not assigned to a Row yet. The trace mode asserts that the delta-based score is uncorrupted, to fail fast on rule engine bugs; it is very slow because it doesn't rely on delta-based score calculation. The debug mode is reproducible (see the reproducible mode). For example, set the logging level to DEBUG to see when the phases end and how fast steps are taken: INFO Phase construction heuristic finished: step total (4), time spend (6), best score (-1). DEBUG Step index (0), time spend (10), score (-1), best score (-1), accepted move size (12) for picked step (col1@row2 => row3). DEBUG Step index (1), time spend (12), score (0), new best score (0), accepted move size (12) for picked step (col3@row3 => row2). INFO Phase local search ended: step total (2), time spend (13), best score (0). INFO Solving ended: time spend (13), best score (0), average calculate count per second (4846). All time spends are in milliseconds. The score calculator instance is asserted into the WorkingMemory as a global called scoreCalculator, and your score rules need to (directly or indirectly) update that scoreCalculator. Score rules make it easy and scalable to add additional soft or hard constraints, such as "a teacher shouldn't teach more than 7 hours a day".
It does delta-based score calculation without any extra code. However, it tends not to be suited to use with property tabu or move tabu. A custom MoveFactory implementation builds and returns the list of moves (a moveList) for a given solution. Future versions might also support move generation by DRL. To get started quickly, Planner comes with a few built-in MoveFactory implementations. GenericSwapPillarMoveFactory: a GenericSwapPillarMove swaps all the planning variables of 2 pillars. A pillar is a set of planning entities that have the same planning values for all those planning variables. For example: given course C10, course C11 and course C12 in room R1 and period P1, and course C20 in room R2 and period P2, put course C10, course C11 and course C12 in room R2 and period P2, and put course C20 in room R1 and period P1. To use one or multiple built-in MoveFactory implementations, configure each one as a Selector, for example: <selector> <moveFactoryClass>org.drools.planner.examples.nurserostering.solver.move.factory.ShiftAssignmentSwitchMoveFactory</moveFactoryClass> </selector> <selector> <moveFactoryClass>org.drools.planner.examples.nurserostering.solver.move.factory.ShiftAssignmentPillarPartSwitchMoveFactory</moveFactoryClass> </selector> They are slightly slower than a custom implementation, but equally scalable. With DEBUG logging, each step is reported: DEBUG Step index (0), time spend (6), score (-3), new best score (-3), accepted move size (12) for picked step (col1@row0 => row3). DEBUG Step index (1), time spend (31), score (-1), new best score (-1), accepted move size (12) for picked step (col0@row0 => row1). DEBUG Step index (2), time spend (40), score (0), new best score (0), accepted move size (12) for picked step (col3@row0 => row2). With TRACE logging, every move evaluation is shown as well: TRACE Move score (-4), accepted (true) for move (col0@row0 => row1). TRACE Move score (-4), accepted (true) for move (col0@row0 => row2). TRACE Move score (-4), accepted (true) for move (col0@row0 => row3). ... TRACE Move score (-3), accepted (true) for move (col1@row0 => row3). ... TRACE Move score (-3), accepted (true) for move (col2@row0 => row3). ... TRACE Move score (-4), accepted (true) for move (col3@row0 => row3). Property tabu makes a property of recent steps tabu. For example, it can make the queen tabu, so that a recently moved queen can't be moved: <acceptor> <propertyTabuSize>5</propertyTabuSize> </acceptor> To use property tabu, your moves must implement the TabuPropertyEnabled interface, for example: public class YChangeMove implements Move, TabuPropertyEnabled { private Queen queen; private Row toRow; // ... } If you write a better implementation, consider contributing it back as a pull request on GitHub and we'll take it along in future refactors and optimize it. Drools Planner supports several optimization algorithms (as solver phases), but you're probably wondering which is the best one. Although some optimization algorithms generally perform better than others, it really depends on your problem domain. Most solver phases have settings which can be tweaked, and those settings can influence the results a lot. A benchmark configuration ends with acceptor and forager settings such as <solutionTabuSize>1000</solutionTabuSize>, <moveTabuSize>5</moveTabuSize>, or <propertyTabuSize>5</propertyTabuSize> and <forager> <pickEarlyType>NEVER</pickEarlyType> </forager>, nested inside the <localSearch>, <solver>, <solverBenchmark>, and <plannerBenchmark> elements. This PlannerBenchmark will try 3 configurations (1 solution tabu, 1 move tabu and 1 property tabu) on 2 data sets (32 and 64 queens), so it will run 6 solvers. Every solverBenchmark entity contains a solver configuration (for example with a local search solver phase) and one or more inputSolutionFile elements.
The benchmark report file will be written in that directory. By default, an XStreamProblemIO instance is used and you just need to configure your Solution class as being annotated with XStream: <problemBenchmarks> <xstreamAnnotatedClass>org.drools.planner.examples.nqueens.domain.NQueens</xstreamAnnotatedClass> <inputSolutionFile>data/nqueens/unsolved/unsolvedNQueens32.xml</inputSolutionFile> ... </problemBenchmarks> However, your input files need to have been written with an XStreamProblemIO instance. The benchmarker also supports other output options. In real-time planning, when a problem fact changes, the working memory must be updated as well, for example (excerpt): for (CloudProcessAssignment cloudProcessAssignment : cloudBalance.getCloudProcessAssignmentList()) { if (ObjectUtils.equals(cloudProcessAssignment.getCloudComputer(), cloudComputer)) { FactHandle cloudProcessAssignmentHandle = workingMemory.getFactHandle(cloudProcessAssignment); cloudProcessAssignment.setCloudComputer(null); workingMemory.retract(cloudProcess...
http://docs.jboss.org/drools/release/5.4.0.Beta2/drools-planner-docs/html_single/index.html
2015-02-27T07:05:23
CC-MAIN-2015-11
1424936460576.24
[]
docs.jboss.org
Import your contacts and calendar appointments from Microsoft Outlook If your Microsoft Outlook contacts and calendar appointments are synced with a web-based email account added to your device, or if your device is running BlackBerry 10 OS version 10.1, this import isn't required. With BlackBerry 10 OS version 10.1 you can perform a 2-way sync of your contacts and calendar data. See the related link below. You can import the contacts and calendar appointments from Microsoft Outlook to your device running BlackBerry 10 OS or later. This import is a one-way transfer of data from your computer to your BlackBerry device. It is important to know that every time you import this data from your computer, the most up-to-date data on your computer replaces what is currently on the device. Contacts and calendar appointments on your device won't remain synced if you choose to import this data again.
http://docs.blackberry.com/en/smartphone_users/deliverables/53140/tom1362401071315.jsp
2015-02-27T06:06:16
CC-MAIN-2015-11
1424936460576.24
[]
docs.blackberry.com
Lesser GNU General Public License 3.0. Auto-cleanup of repositories declared in POMs: can remove external repository declarations in POMs (which is bad practice); cleanup of repos used in active profiles, cleanup of all repos, or no cleanup. On-the-fly conversion of M1 to M2 with custom mappings for ambiguous paths. On-the-fly conversion of M2 to M1 (Artifactory is a Maven 2 only repository manager by design). Ivy modules search. Properties search: search custom properties attached to files and folders (paid add-on). Navigate to the artifacts tree browser from a search result. Reports: report for Problem Artifacts; intentionally blocks bad POMs at runtime instead of polluting your repository and reporting after the fact (paid add-on). Copy Artifacts: copy artifacts between repositories + dry-run to check for warnings + automatic metadata recalculation. Use your own logo image and add custom footer text (OSS). Edit POM. Attach searchable XML metadata to files or folders. Searchable custom metadata: strongly-typed user-defined Properties; tag files and folders with your user-defined searchable properties via the UI (paid add-on). Attach metadata as part of deployment: attach metadata during Maven deployment or via simple REST. Security Framework: Redback (database required), Spring Security (Acegi), JSecurity. Role based. Able to use LDAP groups (authorization from LDAP) - Pro only. Supports multiple realms in order (i.e. LDAP then fallback to internal), with control of whether to fall back to internal users or not.
http://docs.codehaus.org/pages/viewpage.action?pageId=136675714
2015-02-27T06:18:22
CC-MAIN-2015-11
1424936460576.24
[]
docs.codehaus.org
This project produces an API Reference which lists all of the classes in the Framework API. If you would like to help us with this massive project, all you need to do is register on this wiki. You don't need to join a Working Group and you don't need to ask permission; you just register and get started. A good idea is to pick a class you know something about, or have spent time researching, and start there. Also feel free to add examples, improve explanations, or correct mistakes on existing pages. We are only documenting the classes in the /libraries/joomla directory; these are the Joomla Framework classes. Some of the material developed on the wiki API Reference will gradually be merged into the phpDocumentor tags in the source code itself, and hence will end up in the generated documentation too. Our inspiration is the online reference manual for PHP, whose pages can be accessed by a simple, clearly defined, standardised URL and permit user comments to be added. The starting page for the API Reference is the Framework page (Joomla 1.6 Framework | Joomla 1.5 Framework), where you can see a complete list of all the classes in the API. Each class will be given its own wiki page linked to from the Framework page. Each class page includes a list of all the public methods available, and each of those methods will also have its own page (actually a "sub-page"). To simplify the documentation process, the class and method pages have already been created automatically. We reduced the documentation process to three pieces of knowledge that only humans can provide.
https://docs.joomla.org/index.php?title=Archived:API_Reference_Project&diff=100371&oldid=25798
2015-02-27T07:22:11
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
scipy.special.clpmn - scipy.special.clpmn(m, n, z, type=3) Associated Legendre functions of the first kind for complex arguments, with their derivatives. Notes By default, i.e. for type=3, phase conventions are chosen according to [R229] such that the function is analytic. The cut lies on the interval (-1, 1). Approaching the cut from above or below in general yields a phase factor with respect to Ferrer's function of the first kind (cf. lpmn). For type=2 a cut at |x|>1 is chosen. Approaching the real values on the interval (-1, 1) in the complex plane yields Ferrer's function of the first kind.
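A minimal usage sketch (not part of the original page): clpmn returns two (m+1, n+1) arrays holding the function values and their derivatives for all orders 0..m and degrees 0..n.

>>> from scipy.special import clpmn
>>> p, pd = clpmn(2, 2, 0.5 + 0.3j)   # values and derivatives up to m = n = 2
>>> p.shape, pd.shape
((3, 3), (3, 3))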
http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.clpmn.html
2015-02-27T06:04:19
CC-MAIN-2015-11
1424936460576.24
[]
docs.scipy.org
Sets the fragment part of the URI represented by the JURI object. void setFragment( $anchor ) where $anchor is the fragment (anchor) to set. In this example, a URI object is created and a fragment is added to it. $uri = ''; $u =& JURI::getInstance( $uri ); echo 'Before: ' . $u->toString() . "\n"; $u->setFragment( 'anchorthat' ); echo 'After : ' . $u->toString(); This would output the URI before the change and then with the #anchorthat fragment appended.
https://docs.joomla.org/index.php?title=JURI/setFragment&oldid=70860
2015-02-27T06:57:39
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
Product documentation > Developer documentation > Plazmic Content Developer's Kit > User Guide: Plazmic Composer for BlackBerry Smartphones 4.7.
http://docs.blackberry.com/it-it/developers/deliverables/7116/Add_one_or_more_filters_to_a_bitmap_image_630022_11.jsp
2015-02-27T06:23:22
CC-MAIN-2015-11
1424936460576.24
[]
docs.blackberry.com
Action Maps Note Lumberyard's Input component replaces legacy action maps. For more information, see Input in the Amazon Lumberyard User Guide. The Action Map Manager provides a high-level interface to handle input controls inside a game. The Action Map system is implemented in Lumberyard, and can be used directly by any code inside Lumberyard or the GameDLL. Initializing the Action Map Manager The Action Map Manager is initialized when Lumberyard is initialized. Your game must specify the path for the file defaultProfile.xml (by default, the path is Game/Libs/Config/defaultProfile.xml). You can do this by passing the path to the manager. For example: IActionMapManager* pActionMapManager = m_pFramework->GetIActionMapManager(); if (pActionMapManager) { pActionMapManager->InitActionMaps(filename); } Upon initialization, the Action Map Manager clears all existing initialized maps, filters, and controller layouts. Receiving Actions During Runtime You can enable the feature that allows action maps to receive actions during runtime. Use the following code to enable or disable an action map during runtime: pActionMapMan->EnableActionMap("default", true); To receive actions, implement the IActionListener interface in a class.
https://docs.aws.amazon.com/lumberyard/latest/developerguide/controllers-action-maps.html
2018-02-17T21:53:37
CC-MAIN-2018-09
1518891807825.38
[]
docs.aws.amazon.com
The Platform Services Controller handles the authentication between Site Recovery Manager and vCenter Server at the vCenter Single Sign-On level. All communications between Site Recovery Manager and vCenter Server instances take place over transport layer security (TLS) connections. Previous versions of Site Recovery Manager supported both secure sockets layer (SSL) and TLS connections. This version of Site Recovery Manager only supports TLS, due to weaknesses identified in SSL 3.0. Solution User Authentication In Site Recovery Manager 5.x, you used either credential-based authentication or certificate-based authentication to authenticate with vCenter Server. Site Recovery Manager 6.x uses solution user authentication to establish secure communication to remote services, such as the Platform Services Controller and vCenter Server. A solution user is a security principal that the Site Recovery Manager installer generates. The installer assigns a private key and a certificate to the solution user and registers it with the vCenter Single Sign-On service. The solution user is tied to a specific Site Recovery Manager instance. You cannot access the solution user private key or certificate. You cannot replace the solution user certificate with a custom certificate. After installation, you can see the Site Recovery Manager solution user in the Administration view of the vSphere Web Client. Do not attempt to manipulate the Site Recovery Manager solution user. The solution user is for internal use by Site Recovery Manager, vCenter Server, and vCenter Single Sign-On. During operation, Site Recovery Manager establishes authenticated communication channels to remote services by using certificate-based authentication to acquire a holder-of-key SAML token from vCenter Single Sign-On. Site Recovery Manager sends this token in a cryptographically signed request to the remote service. The remote service validates the token and establishes the identity of the solution user. Solution Users and Site Recovery Manager Site Pairing When you pair Site Recovery Manager instances across vCenter Single Sign-On sites that do not use Enhanced Linked Mode, Site Recovery Manager creates an additional solution user for the remote site at each site. This solution user for the remote site allows the Site Recovery Manager Server at the remote site to authenticate to services on the local site. When you pair Site Recovery Manager instances in a vCenter Single Sign-On environment with Enhanced Linked Mode, Site Recovery Manager at the remote site uses the same solution user to authenticate to services on the local site. Site Recovery Manager SSL/TLS Server Endpoint Certificates Site Recovery Manager requires an SSL/TLS certificate for use as the endpoint certificate for all TLS connections established to Site Recovery Manager. The Site Recovery Manager server endpoint certificate is separate and distinct from the certificate that is generated during the creation and registration of a Site Recovery Manager solution user. For information about the Site Recovery Manager SSL/TLS endpoint certificate, see Creating SSL/TLS Server Endpoint Certificates for Site Recovery Manager.
https://docs.vmware.com/en/Site-Recovery-Manager/6.1/com.vmware.srm.install_config.doc/GUID-4FBC345A-F3CA-4A0A-9FC2-970BBFB293CD.html
2018-02-17T21:41:20
CC-MAIN-2018-09
1518891807825.38
[]
docs.vmware.com
You can use the reports that vSphere Replication compiles to optimize your environment for replication, identify problems in your environment, and reveal their most probable cause. Server and site connectivity, the number of RPO violations, and other metrics give you, as an administrator, the information you need to diagnose replication issues. The following sections contain examples of interpreting the data displayed under Reports on the vSphere Replication tab under Monitor. RPO Violations A large number of RPO violations can be caused by various problems in the environment, on both the source and the target site. With more details on historical replication jobs, you can make educated decisions on how to manage the replication environment. Transferred Bytes Correlating the total number of transferred bytes with the number of RPO violations can help you make decisions on how much bandwidth might be required to meet RPO objectives. Replicated Virtual Machines by Host The number of replicated virtual machines by host helps you determine how the replication workload is distributed in your environment. For example, if the number of replicated virtual machines on a host is high, the host might be overloaded with replication jobs. You might want to verify that the host has enough resources to maintain all replication jobs. If needed, you can check for hosts with a low number of replicated virtual machines and optimize the allocation of resources in your environment.
https://docs.vmware.com/en/vSphere-Replication/6.0/com.vmware.vsphere.replication-admin.doc/GUID-E9B5A24A-0BC1-43A7-9FBD-00A967E50C84.html
2018-02-17T21:42:58
CC-MAIN-2018-09
1518891807825.38
[]
docs.vmware.com
Configuring After you have installed your Ganeti Web Manager instance with the setup script, it's time for some configuration. Configuration of Ganeti Web Manager can be defined with YAML, a human-readable markup language. Ganeti Web Manager also supports configuration through settings.py, for those that do not wish to use YAML; however, YAML configuration is preferred. The YAML configuration method makes it much easier to store settings outside of the project's repository, which makes managing settings with a configuration management tool easier and safer. The YAML configuration file is always named config.yml. You can customize the location Ganeti Web Manager looks for this file by setting the GWM_CONFIG_DIR environment variable. The current default is /opt/ganeti_webmgr/config, so by default you will need to put your YAML config in /opt/ganeti_webmgr/config/config.yml. If you want to customize the location you can set GWM_CONFIG_DIR like so: $ export GWM_CONFIG_DIR='/etc/ganeti_webmgr' This will cause Ganeti Web Manager to look for your config file at /etc/ganeti_webmgr/config.yml. When both config.yml and settings.py are present, any settings stored in settings.py take precedence. Note A quick note about settings: any setting value which contains a - or :, or any other character used by YAML, must be wrapped in quotes. Example: localhost:8000 becomes "localhost:8000". Creating configuration files To get started configuring Ganeti Web Manager with YAML, copy config.yml.dist to config.yml in the directory where you want your settings: $ cp /path/to/gwm/ganeti_webmgr/ganeti_web/settings/config.yml.dist /opt/ganeti_webmgr/config/config.yml Alternatively, to configure Ganeti Web Manager with the standard Django settings.py, copy settings.py.dist to settings.py in the same directory it is in: $ cp /path/to/gwm/ganeti_webmgr/ganeti_web/settings/settings.py.dist \ /path/to/gwm/ganeti_webmgr/ganeti_web/settings/settings.py Databases Ganeti Web Manager supports PostgreSQL, MySQL, Oracle, and SQLite databases. The type of database and other configuration options must be defined in either settings.py or config.yml. These settings are not set by default like most other settings in Ganeti Web Manager, so be sure to actually configure your database settings. Configuring SQLite in config.yml or settings.py follows the pattern shown in the sample configuration at the end of this page. For PostgreSQL, Oracle, and MySQL, replace .sqlite in the engine field with .postgresql_psycopg2, .oracle, or .mysql respectively: # config.yml DATABASES: default: ENGINE: django.db.backends.mysql NAME: database_name USER: database_user PASSWORD: database_password HOST: db.example.com PORT: # leave blank for default port Secret Keys By default Ganeti Web Manager creates a SECRET_KEY and a WEB_MGR_API_KEY for you the first time you run a command using django-admin.py, and puts the secret key into a file located at /opt/ganeti_webmgr/.secrets/SECRET_KEY.txt. This is to make initial setup easier and less hassle for you. This key is used for protection against CSRF attacks as well as for encrypting your Ganeti cluster password in the database. Once set, you should avoid changing it if possible. If you want to have better control of this setting you can set the SECRET_KEY setting in config.yml like so: SECRET_KEY: ANW61553mYBKJft6pYPLf1JbTeHKLutU Please do not use this value, but instead generate something random yourself. You do not want to share this, or make it publicly accessible.
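One quick way to generate a suitably random value is Python's standard secrets module. This is just a generic sketch of one way to do it, not a command shipped with Ganeti Web Manager:

$ python -c "import secrets; print(secrets.token_urlsafe(32))"

Paste the printed value into config.yml as your SECRET_KEY (or WEB_MGR_API_KEY, discussed below).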
A leaked key can be used to bypass protections Ganeti Web Manager has implemented for you. If you are using the SSH Keys feature to add keys to VMs with Ganeti Web Manager, you will also need to set the WEB_MGR_API_KEY setting in config.yml or keep the value created for you in /opt/ganeti_webmgr/.secrets/WEB_MGR_API_KEY.txt. This is the same value you will use when running the sshkeys.py or sshkeys.sh scripts. Similarly, it should be something impossible to guess, much like the SECRET_KEY setting: WEB_MGR_API_KEY: 3SqmsCnNiuDY9lAVIh3Tx3RIJfql6sIc Again, do not use the value above. If anyone gains access to this key, and you are using the sshkeys feature, it will allow them to add arbitrary SSH keys to your Virtual Machines. Note We have not included these settings in the example config.yml at the bottom of this page for security reasons. We do not want anyone copying the values we've used in our examples for security-sensitive settings such as this. If you wish to set these yourself, you will need to manually add them to config.yml. Time zone and locale Ganeti Web Manager supports time zones, translations, and localizations for currency, time, etc. To find the correct time zone for your locale, visit the List of time zones. For language codes, see the List of language codes. Not every language is supported by Ganeti Web Manager. Date and datetime formats follow the Django date format. For instance, d/m/Y will result in dates formatted with two-digit days and months and four-digit years. A standard configuration might look something like this: TIME_ZONE: America/Los_Angeles DATE_FORMAT: d/m/Y DATETIME_FORMAT: "d/m/Y H:i" LANGUAGE_CODE: "en-US" # Enable i18n (translations) and l10n (locales, currency, times). USE_I18N: True # If you set this to False, Django will not format dates, numbers and # calendars according to the current locale USE_L10N: True Registration and e-mails To set up Ganeti Web Manager to send registration emails, you'll need access to an SMTP server. You can configure the SMTP host, port, and email address; see the sample configuration at the end of this page. For more complicated email setups, refer to the Django email documentation. Allowing open registration means that users can create their own new accounts in Ganeti Web Manager. The users will then have the number of days set in ACCOUNT_ACTIVATION_DAYS to activate their account: ALLOW_OPEN_REGISTRATION: True ACCOUNT_ACTIVATION_DAYS: 7 More details can be found in the Open Registration documentation. Site root and static files The site root, static root, and static URL must also be set when configuring Ganeti Web Manager. SITE_ROOT is the subdirectory of the website under which Ganeti Web Manager is served; the current default is empty. STATIC_ROOT is the directory on the filesystem in which Ganeti Web Manager's static files will be placed when you run django-admin.py collectstatic. The current default is /opt/ganeti_webmgr/collected_static. STATIC_URL is the full URL where Ganeti Web Manager will look when trying to obtain static files. The default for this is currently /static, which means it will look on the same domain the site is hosted on. A standard configuration, serving Ganeti Web Manager under a /web_admin subdirectory, might look like this: SITE_ROOT: /web_admin STATIC_ROOT: /opt/ganeti_webmgr/collected_static STATIC_URL: /static Haystack Search Settings Haystack is Ganeti Web Manager's way of performing search indexing. It currently has one setting which you need to worry about.
HAYSTACK_WHOOSH_PATH is the path to a location on the filesystem in which Ganeti Web Manager will store the search index files. This location needs to be readable and writable by whatever user is running Ganeti Web Manager. Example users might be the apache or nginx user, or whatever user you've set the Ganeti Web Manager process to run as. The default path for this setting is /opt/ganeti_webmgr/whoosh_index. An example of this setting might be: HAYSTACK_WHOOSH_PATH: /opt/ganeti_webmgr/whoosh_index More details can be found in the search documentation. Other settings ITEMS_PER_PAGE is a setting allowing you to globally limit or extend the number of items on a page listing things. This currently defaults to 15 items per page, so your pages will have up to 15 VMs, clusters, and nodes listed on a single page. You might adjust this to a lower value if you find that loading a large number on a single page slows things down. ITEMS_PER_PAGE: 20 Set VNC_PROXY to the hostname:port pair of your VNCAuthProxy server. The VNC AuthProxy does not need to run on the same server as Ganeti Web Manager. VNC_PROXY: "localhost:8888" LAZY_CACHE_REFRESH (milliseconds) is the fallback cache timer that is checked when the object is instantiated. It defaults to 600000 ms, or ten minutes. LAZY_CACHE_REFRESH: 600000 RAPI_CONNECT_TIMEOUT is how long Ganeti Web Manager will wait in seconds before timing out when requesting data from the Ganeti cluster. RAPI_CONNECT_TIMEOUT: 3 Sample configuration An annotated sample YAML configuration file is shown below: # config.yml # Django settings for ganeti_webmgr project. ##### Database Configuration ##### DATABASES: default: ENGINE: django.db.backends.sqlite3 # django.db.backends.sqlite3 # django.db.backends.postgresql # django.db.backends.mysql # django.db.backends.oracle # django.db.backends.postgresql_psycopg2 # Or path to database file if using sqlite3. ##### End Database Configuration ##### # Site name and domain referenced by some modules to provide links back to # the site. SITE_NAME: Ganeti Web Manager SITE_DOMAIN: "localhost:8000" # Local time zone for this installation. TIME_ZONE: America/Los_Angeles DATE_FORMAT: d/m/Y DATETIME_FORMAT: "d/m/Y H:i" # Language code for this installation. LANGUAGE_CODE: "en-US" ##### End Locale Configuration ##### # Enable i18n (translations) and l10n (locales, currency, times). # You really have no good reason to disable these unless you are only # going to be using GWM in English. USE_I18N: True # If you set this to False, Django will not format dates, numbers and # calendars according to the current locale USE_L10N: True # prefix used for the site. ie.<SITE_ROOT> # for apache this is the url the site is mapped to, probably /tracker SITE_ROOT: "" # Absolute path to the directory that holds media. # Example: /home/media/media.lawrence.com/ STATIC_ROOT: /opt/ganeti_webmgr/collected_static # URL that handles the media served from STATIC_ROOT. # XXX contrary to django docs, do not use a trailing slash. It makes urls # using this url easier to read. ie. {{STATIC_URL}}/images/foo.png STATIC_URL: /static ##### Registration Settings ##### ACCOUNT_ACTIVATION_DAYS: 7 # Email settings for registration EMAIL_HOST: localhost EMAIL_PORT: 25 DEFAULT_FROM_EMAIL: [email protected] # Whether users should be able to create their own accounts. # False if accounts can only be created by admins.
ALLOW_OPEN_REGISTRATION: True ##### End Registration Settings ##### ####### Haystack Search Index settings ####### HAYSTACK_WHOOSH_PATH: /opt/ganeti_webmgr/whoosh_index ####### End Haystack Search Index settings ####### # GWM Specifics # The maximum number of items on a single list page ITEMS_PER_PAGE: 15 # Ganeti Cached Cluster Objects Timeouts # LAZY_CACHE_REFRESH (milliseconds) is the fallback cache timer that is # checked when the object is instantiated. It defaults to 600000ms, or ten # minutes. LAZY_CACHE_REFRESH: 600000 # VNC Proxy. This will use a proxy to create local ports that are forwarded to # the virtual machines. It allows you to control access to the VNC servers. # # Expected values: # String syntax: HOST:CONTROL_PORT, for example: localhost:8888. If # localhost is used then the proxy will only be accessible to clients and # browsers on localhost. Production servers should use a publicly accessible # hostname or IP # # Firewall Rules: # Control Port: 8888, must be open between Ganeti Web Manager and Proxy # Internal Ports: 12000+ must be open between the Proxy and Ganeti Nodes # External Ports: default is 7000-8000, must be open between Proxy and Client # Flash Policy Server: 843, must be open between Proxy and Clients VNC_PROXY: "localhost:8888" # This is how long gwm will wait before timing out when requesting data from the # ganeti cluster. RAPI_CONNECT_TIMEOUT: 3
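After editing config.yml, it can be worth confirming that the file still parses as valid YAML before restarting the service. The check below uses the PyYAML library and is a generic sketch, not part of Ganeti Web Manager; adjust the path if you changed GWM_CONFIG_DIR:

$ python -c "import yaml; yaml.safe_load(open('/opt/ganeti_webmgr/config/config.yml')); print('config.yml parses OK')"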
http://ganeti-webmgr.readthedocs.io/en/latest/getting_started/configuring.html
2018-02-17T21:34:07
CC-MAIN-2018-09
1518891807825.38
[]
ganeti-webmgr.readthedocs.io
Paint Mode The tools in Paint mode enable you to modify the appearance of your Landscape by selectively applying Material layers to parts of your Landscape. For more information about Landscape Materials, see Landscape Materials . Landscape Paint Mode also works in the VR Editor. For the controls for using Landscape in VR, see the VR Editor Controls . Painting Tools You can use the Painting Tools to modify the appearance of your Landscape by selectively applying layers of specially designed Landscape Materials to sections of your Landscape. The Painting Tools have some common options: For more information about Landscape Material layers, see Layers , later on this page. Paint The Paint tool increases or decreases the weight of the Material layer being applied to the Landscape, in the shape of the currently selected brush and falloff. Smooth The Smooth tool smooths the layer weight. The strength determines the amount of smoothing. Layer Smoothing Flatten The Flatten tool directly sets the selected layer's weight to the value of the Tool Strength slider. Noise The tool applies a noise filter to the layer weight. The strength determines the amount of noise. Layers A layer is the part of the assigned Landscape Material that you want to paint onto your Landscape to change its appearance. Landscape layers determine how a texture (or material network) is applied to a Landscape terrain. A Landscape can use multiple layers with different Textures, scaling, rotation, and panning blended together to create the final textured terrain. The layers defined in the Landscape Material automatically populate the list of Target Layers in the Landscape tool's Paint mode. Each layer is displayed with its name and a small thumbnail image. Whichever layer is selected is the one you can apply to the Landscape with the Painting Tools, according to the tools' options and settings, and to the brush you are using. Many of the Painting Tools are similar to the Sculpting Tools, and you use them similarly, but to manipulate the application of layers instead of the heightmap. You create layers in the Material itself. For more information about layers and Landscape Materials, see Landscape Materials . Layer Info Objects A layer info object is an asset that contains information about the Landscape layer. Every Landscape layer must have a layer info object assigned to it, or else it cannot be painted. You can create layer info objects from the Landscape tool. There are two kinds of layer info object, Weight-Blended and Non Weight-Blended: Weight-Blended - The usual kind of layers that affect each other. Painting a weight-blended layer will decrease the weight of all other weight-blended layers. For example, painting mud will remove grass, and painting grass will remove mud. Non Weight-Blended - Layers that are independent of each other. Painting a non weight-blended layer does not affect the weights of the other layers. These are used for more advanced effects, such as blending snow onto other layers: instead of having grass, mud, rock or snow, you would use a non weight-blended snow layer to blend between "grass, mud, or rock" and "snowy grass, snowy mud, or snowy rock." You can either create a layer info object from the layer itself, or reuse an existing layer info object from another Landscape. To create a layer info object: Press the plus icon to the right of the Layer name. Choose Weight-Blended Layer (normal) or Non Weight-Blended Layer. Choose the location to save the layer info object. 
After their creation, layer info objects exist as assets in the Content Browser, such as the following: They can then be reused by other Landscapes. Even though you can use the same layer info object in multiple Landscapes, within a single Landscape, you can use each layer info object only once. Each layer in a Landscape must use a different layer info object. To reuse an existing layer info object from another Landscape: Find and select the layer info object in the Content Browser. In the Landscape tool, in the Target Layers section, to the right of the layer with which you want to use the layer info type, click the Assign icon ( ). Layer info objects can only be used if their layer name matches the layer they were originally created for. The primary purpose of layer info objects is to act as a unique key for painted layer data, but they also contain a couple of user-editable properties: Orphaned Layers If a layer is removed from the Landscape Material after it has populated the Target Layers list of a Landscape, and it has painted data on the Landscape, it will be displayed in the list with a ? icon. This denotes an orphaned layer. Areas previously painted with this layer will likely appear black, but the exact behavior depends on your Landscape Material. Deleting Orphaned Layers You can delete orphaned layers from the Landscape, though it is recommended that you first paint over any areas where the layer was used. The painted layer data is preserved until the layer is deleted, so no information is lost if you make a mistake in the Landscape Material. To delete a layer from your Landscape: Click the X icon to the right of the layer's name. Weight Editing At every Landscape vertex, each layer has a weight specifying how much influence that layer has on the Landscape. Layers have no particular blending order. Instead, each layer's weight is stored separately and the results added. In the case of weight-blended layers, the weights add up to 255. Non weight-blended layers are independent of other layers and can have any weight value. You can use the Paint tool to increase or decrease the weight of the active layer. To do so, select the layer whose weight you want to adjust, and use one of the Painting tools to apply the layer to the Landscape. For Weight-Blended layers, as you increase the weight of one layer, the weight of the other layers will be uniformly decreased. Fully painting one layer will result in no weight on any other layer. When you are reducing a weight-blended layer by holding down Ctrl + Shift while painting, it is not clear what layer should be increased to replace it. The current behavior is to uniformly increase the weights of any other layers. Because of this behavior, it is not possible to paint all layers completely away. Instead of painting layers away, it is recommended you choose the layer you want to paint in its place, and paint that additively.
https://docs.unrealengine.com/latest/INT/Engine/Landscape/Editing/PaintMode/
2018-02-17T21:01:29
CC-MAIN-2018-09
1518891807825.38
[array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_Paint.jpg', 'Paint Tool'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/SculptMode/Landscape_Smooth.jpg', 'Smooth Tool'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_Smooth_Layer_Before.jpg', 'Landscape Smooth Layer Before'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_Smooth_Layer_After.jpg', 'Landscape Smooth Layer After'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/SculptMode/Landscape_FlattenTool.jpg', 'Flatten Tool'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/SculptMode/Landscape_Noise.jpg', 'Noise Tool'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_Target.png', 'Landscape_Target.png'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_Layers.jpg', 'Landscape_Layers.jpg'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/PaintMode/Landscape_InfoObject.jpg', 'Layer Info Object'], dtype=object) array(['./../../../../../images/Engine/Landscape/Editing/Landscape_MissingLayer.jpg', 'Missing Layer'], dtype=object) ]
docs.unrealengine.com
- Release Notes > - Release Notes for MongoDB 2.4 > - JavaScript Changes in MongoDB 2.4 JavaScript Changes in MongoDB 2.4¶ properties,: original = [4, 8, 15]; var [b, ,c] = a; // <== destructuring assignment print(b) // 4 print(c) // 15: var o = { name: 'MongoDB', version: 2.4 }; for each (var value in o) { print(value); } Instead, in version 2.4, you can use the for (var x in y) construct: var o = { name: 'MongoDB', version: 2.4 }; for (var prop in o) { var value = o[prop]; print(value); } You can also use the array instance method forEach() with the ES5 method Object.keys(): Object.keys(o).forEach(function (key) { var value = o[key]; print(value); }); Array Comprehension¶ V8 does not support Array comprehensions. Use other methods such as the Array instance methods map(), filter(), or forEach(). Example With V8, the following array comprehension is invalid: var a = { w: 1, x: 2, y: 3, z: 4 } var arr = [i * i for each (i in a) if (i > 2)] printjson(arr) Instead, you can implement using the Array instance method forEach() and the ES5 method Object.keys() : var a = { w: 1, x: 2, y: 3, z: 4 } var arr = []; Object.keys(a).forEach(function (key) { var val = a[key]; if (val > 2) arr.push(val * val); }) printjson(arr)": try { something() } catch (err if err instanceof SomeError) { print('some error') } catch (err) { print('standard error') } Conditional Function Definition¶ V8 will produce different outcomes than SpiderMonkey with conditional function definitions. Example The following conditional function definition produces different outcomes in SpiderMonkey versus V8: function test () { if (false) { function go () {}; } print(typeof go) }. function test () { var go; if (false) { go = function () {} } print(typeof go) } The refactored code outputs undefined in both SpiderMonkey and V8. Note ECMAscript prohibits conditional function definitions. To force V8 to throw an Error, enable strict mode. function test () { 'use strict'; if (false) { function go () {} } } The JavaScript code throws the following syntax error: SyntaxError: In strict mode code, functions can only be declared at top level or immediately within another function. String Generic Methods¶ V8 does not support String generics. String generics are a set of methods on the String class that mirror instance methods. Example The following use of the generic method String.toLowerCase() is invalid with V8: var name = 'MongoDB'; var lower = String.toLowerCase(name); With V8, use the String instance method toLowerCase() available through an instance of the String class instead: var name = 'MongoDB'; var lower = name.toLowerCase(); print(name + ' becomes ' + lower);: var arr = [4, 8, 15, 16, 23, 42]; function isEven (val) { return 0 === val % 2; } var allEven = Array.every(arr, isEven); print(allEven); With V8, use the Array instance method every() available through an instance of the Array class instead: var allEven = arr.every(isEven); print(allEven);. Thank you for your feedback! We're sorry! You can Report a Problem to help us improve this page.
http://docs.mongodb.org/manual/release-notes/2.4-javascript/
2015-07-28T05:46:24
CC-MAIN-2015-32
1438042981576.7
[]
docs.mongodb.org
Revision history of "JDocumentHTML::setHeadDataHeadData/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDocumentHTML::setHeadData== ===Description=== Set the html document head data. {{Description:JDocumentHTML::setHeadData}} <span class=..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JDocumentHTML::setHeadData/1.6&action=history
2015-07-28T06:11:17
CC-MAIN-2015-32
1438042981576.7
[]
docs.joomla.org
Setting up the sample application in the BlackBerry JDE - On the taskbar, click Start > Applications > Research In Motion > BlackBerry JDE 4.6.0 > JDE to open the BlackBerry® Java® Development Environment. - Press F5 to build the open projects and start the BlackBerry® Smartphone Simulator. Run the sample application
http://docs.blackberry.com/es-es/developers/deliverables/7649/Setting_up_for_JDE_organizer_1009971_11.jsp
2015-07-28T06:12:02
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
I can't open media files Try the following actions: - If you're trying to open a media file on your BlackBerry smartphone and your smartphone is connected to your computer, disconnect your smartphone from your computer. - If you're trying to open a media file on your computer using your smartphone as a USB drive, verify that you have closed the media transfer options in the BlackBerry Desktop.
http://docs.blackberry.com/en/smartphone_users/deliverables/37644/I_cannot_open_media_files_61_1478853_11.jsp
2015-07-28T06:00:01
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
scipy.odr.ODR¶ - class scipy.odr. ODR(data, model, beta0=None, delta0=None, ifixb=None, ifixx=None, job=None, iprint=None, errfile=None, rptfile=None, ndigit=None, taufac=None, sstol=None, partol=None, maxit=None, stpb=None, stpd=None, sclb=None, scld=None, work=None, iwork=None)[source]¶ The ODR class gathers all information and coordinates the running of the main fitting routine. Members of instances of the ODR class have the same names as the arguments to the initialization routine. - Parameters - dataData class instance instance of the Data class - modelModel class instance instance of the Model class - Other Parameters - beta0array_like of rank-1 a rank-1 sequence of initial parameter values. Optional if model provides an “estimate” function to estimate these values. - delta0array_like of floats of rank-1, optional a (double-precision) float array to hold the initial values of the errors in the input variables. Must be same shape as data.x - ifixbarray_like of ints of rank-1, optional sequence of integers with the same length as beta0 that determines which parameters are held fixed. A value of 0 fixes the parameter, a value > 0 makes the parameter free. - ifixxarray_like of ints with same shape as data.x, optional. - jobint, optional an integer telling ODRPACK what tasks to perform. See p. 31 of the ODRPACK User’s Guide if you absolutely must set the value here. Use the method set_job post-initialization for a more readable interface. - iprintint, optional an integer telling ODRPACK what to print. See pp. 33-34 of the ODRPACK User’s Guide if you absolutely must set the value here. Use the method set_iprint post-initialization for a more readable interface. - errfilestr, optional string with the filename to print ODRPACK errors to. Do Not Open This File Yourself! - rptfilestr, optional string with the filename to print ODRPACK summaries to. Do Not Open This File Yourself! - ndigitint, optional integer specifying the number of reliable digits in the computation of the function. - taufacfloat, optional float specifying the initial trust region. The default value is 1. The initial trust region is equal to taufac times the length of the first computed Gauss-Newton step. taufac must be less than 1. - sstolfloat, optional float specifying the tolerance for convergence based on the relative change in the sum-of-squares. The default value is eps**(1/2) where eps is the smallest value such that 1 + eps > 1 for double precision computation on the machine. sstol must be less than 1. - partolfloat, optional float specifying the tolerance for convergence based on the relative change in the estimated parameters. The default value is eps**(2/3) for explicit models and eps**(1/3)for implicit models. partol must be less than 1. - maxitint, optional integer specifying the maximum number of iterations to perform. For first runs, maxit is the total number of iterations performed and defaults to 50. For restarts, maxit is the number of additional iterations to perform and defaults to 10. - stpbarray_like, optional sequence ( len(stpb) == len(beta0)) of relative step sizes to compute finite difference derivatives wrt the parameters. - stpdoptional array ( stpd.shape == data.x.shapeor stpd.shape == (m,)) of relative step sizes to compute finite difference derivatives wrt the input variable errors. If stpd is a rank-1 array with length m (the dimensionality of the input variable), then the values are broadcast to all observations. 
- sclbarray_like, optional sequence ( len(stpb) == len(beta0)) of scaling factors for the parameters. The purpose of these scaling factors is to scale all of the parameters to around unity. Normally appropriate scaling factors are computed if this argument is not specified. Specify them yourself if the automatic procedure goes awry. - scldarray_like, optional array (scld.shape == data.x.shape or scld.shape == (m,)) of scaling factors for the errors in the input variables. Again, these factors are automatically computed if you do not provide them. If scld.shape == (m,), then the scaling factors are broadcast to all observations. - workndarray, optional array to hold the double-valued working data for ODRPACK. When restarting, takes the value of self.output.work. - iworkndarray, optional array to hold the integer-valued working data for ODRPACK. When restarting, takes the value of self.output.iwork. - Attributes - dataData The data for this fit - modelModel The model used in fit - outputOutput An instance of the Output class containing all of the returned data from an invocation of ODR.run() or ODR.restart() Methods
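As a quick illustration of how these pieces fit together (the model function and data below are made up for the example), a typical orthogonal distance regression run looks like this:

```python
import numpy as np
from scipy.odr import ODR, Model, Data

def linear(beta, x):
    # beta holds the fit parameters: slope and intercept.
    return beta[0] * x + beta[1]

# Synthetic data with a little noise (illustrative only).
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.2, size=x.shape)

data = Data(x, y)
model = Model(linear)
odr = ODR(data, model, beta0=[1.0, 0.0])  # beta0: initial parameter guess
output = odr.run()

print(output.beta)     # estimated parameters
print(output.sd_beta)  # standard errors of the parameters
```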
https://docs.scipy.org/doc/scipy-1.5.3/reference/generated/scipy.odr.ODR.html
2021-05-06T03:53:50
CC-MAIN-2021-21
1620243988725.79
[]
docs.scipy.org
Subtractive Lighting Mode applies to a Scene when that Scene uses a Lighting Settings Asset with its Lighting Mode property set to Subtractive. In Subtractive Lighting Mode, all Mixed Lights in your Scene provide baked direct and indirect lighting. Unity bakes shadows cast by static GameObjects into the lightmaps.
https://docs.unity3d.com/Manual/LightMode-Mixed-Subtractive.html
2021-05-06T04:47:09
CC-MAIN-2021-21
1620243988725.79
[]
docs.unity3d.com
Android Resources With Android additional files and static content are defined as resources, this includes images, layouts, strings, configuration, icons and more. Sometimes you will need to include resources with your AIR application either to configure a service (such as Firebase) or to provide visual assets for system interactions (such as notification icons). Note AIR in the past has not had a direct method for including these resources and instead you have had to build a native extension to package these resources with your application. However, with AIR 33.1.1.406, Harman has now introduced several options to simply include resources with your application. Gather ResourcesGather Resources The first step is always to assemble your resources into a folder. Generally we use a folder named res to keep it inline with the Android equivalent however you should be able to use any folder name. For example: Generally you will use tools to assemble these resources for you, such as the Android Asset Studio Notification Icon Generator PackagingPackaging Once you have your resources, you can use one of the methods below to package these resources with your application. Application DescriptorApplication Descriptor info Requires AIR 33.1.1.406 or higher The simplest method (and the method we suggest using) is specifying the resource directory in your application descriptor. To do this, simply add the resdir tag to your application descriptor: Place this at the same level as the android tag, eg: The resdir can be either the: - relative path (as above); - absolute path; When using a relative path, specify the location of the folder relative to your content, eg: We recommend using the relative path, especially if you are dealing with multiple developers where absolute paths could change between systems. Command LineCommand Line info Requires AIR 33.1.1.300 or higher If you can modify the adt command used to build your application you can add a command line option to specify the resources directory similar to the previous method. To do this, add the -resdir option to your command specifying the path to your resources directory. Unfortunately this process cannot be used by most IDEs as you cannot modify the command directly so we don't suggest using this method. Custom ANECustom ANE Previous to AIR 33.1.1.300/406 you would have had to generate a custom resources native extension and add this to your application. To this end we made a script available in the following repository to build this extension for you: This project uses an Apache Ant build script to create and package an ANE with your custom Android resources. Follow the guide in the repository to generate this extension. This is still a viable solution for AIR, but we recommend the application descriptor solution for new applications.
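For clarity, here is roughly what the descriptor entry described above looks like. This is only a sketch: the namespace version and the res folder name are placeholders, so match them to your own project.

```xml
<?xml version="1.0" encoding="utf-8"?>
<application xmlns="http://ns.adobe.com/air/application/33.1">
    <!-- ...other descriptor entries... -->

    <!-- Relative path to your Android resources folder -->
    <resdir>res</resdir>

    <android>
        <!-- ...android-specific settings... -->
    </android>
</application>
```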
https://docs.airnativeextensions.com/docs/tutorials/android-resources/
2021-05-06T03:18:57
CC-MAIN-2021-21
1620243988725.79
[]
docs.airnativeextensions.com
Building with Docker¶ Officially supported and tested method for building with Docker¶ This method works for building u-boot and kernel packages as well as building full OS images. Building additional packages ( EXTERNAL_NEW) is not supported. Requirements¶ - x86/x64 Linux host that supports running a recent Docker daemon. Refer to Docker documentation for details. Docker version 17.06 CE or newer. Installation on Ubuntu Focal: apt-key adv --keyserver pool.sks-keyservers.net --recv-keys 0EBFCD88 echo "deb [arch=amd64] focal stable" > /etc/apt/sources.list.d/docker.list apt update apt install docker-ce Enough free disk space on the storage used for Docker containers and named volumes. Named volumes path can be changed using standard Docker utilites, refer to Docker documentation for details. Details¶ There are 2 options to start build process: - By passing configuration file name ( config-<conf_name>.conf), stored in userpatchesdirectory, as an argument: ./compile.sh docker <conf_name> - By passing addtional line arguments to compile.shafter docker: ./compile.sh docker KERNEL_ONLY=yes BOARD=cubietruck BRANCH=current KERNEL_CONFIGURE=yes The process creates and runs a named Docker container armbian with two named volumes armbian-cache and armbian-ccache, and mount local directories output and userpatches. Creating and running Docker container manually¶ NOTE: These methods are not supported by Armbian developers. Use them at your own risk. Example: Building Armbian using Red Hat or CentOS¶ First of all, it is important to notice that you will be able to build kernel and u-boot packages. The container method is not suitable for building full Armbian images (the full SD card image containing the userland packages). This setup procedure was validated to work with Red Hat Enterprise Linux 7. Preparing your build host¶ In order to be able to run Docker containers, if you have not done so, just install the Docker package: yum install -y docker By default, the docker service is not started upon system reboot. If you wish to do so: systemctl enable docker Ensure that you have the docker service running: systemctl start docker` Next step, chdir to a directory where you will be checking out the Armbian build repository. I use /usr/src. And then, check out using git (with shallow tree, using --depth 1, in order to speed up the process): cd /usr/src git clone --depth 1 And in order to not mistake the newly created build directory, I rename it to build-armbian. cd to the directory: mv build build-armbian cd build-armbian Preparing the Container¶ Our Build toolchain provides a scripted way to create a container and run the container. Run: ./compile.sh docker Give it some minutes, as it downloads a non-neglectible amount of data. After your image is created (named armbian), it will automatically spawn the Armbian build container. NOTICE: In some cases, it is possible that SELinux might block your access to /root/armbian/cache temporary build directory. You can fix it by either adding the correct SELinux context to your host cache directory, or, disabling SELinux. Get acquainted with the Build system. If you want to get a shell in the container, skipping the compile script, you can also run: docker run -dit --entrypoint=/bin/bash -v /mnt:/root/armbian/cache armbian_dev The above command will start the container with a shell. 
To get the shell session: docker attach <UUID of your container, returned in the above command> If you want to run SSH in your container, log in and install the ssh package: apt-get install -y ssh Now, define a password and prepare the settings so you sshd can run and you can log in as root: passwd sed -i -e 's/PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config mkdir /var/run/sshd chmod 0755 /var/run/sshd And finally start sshd: /usr/sbin/sshd Do NOT type exit - that will stop your container. To leave your container running after starting sshd, just type <Ctrl-P> and <Ctrl-Q>. Now you can ssh to your container.
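For reference, the configuration-file form mentioned above is just a shell fragment saved under userpatches. A minimal sketch (the file name and values are only examples) could look like this, after which ./compile.sh docker example would pick it up:

```sh
# userpatches/config-example.conf -- example values only
KERNEL_ONLY="yes"          # build u-boot and kernel packages, not a full image
BOARD="cubietruck"
BRANCH="current"
KERNEL_CONFIGURE="no"
```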
https://docs.armbian.com/Developer-Guide_Building-with-Docker/
2021-05-06T04:27:39
CC-MAIN-2021-21
1620243988725.79
[]
docs.armbian.com
[…] Category: Tracking Expiration Items Overview Tags allows you to add labels or tags to expiration items to identify them with a specific characteristic. They’re commonly used to designate a status to the expiration item or just to classify them into certain groups for later filtering. You can have multiple tags configured in Expiration Reminder for many different things. Each […] No expiration date Overview There are times that you don’t have on hand the expiration date of an item but you would like to track it so you know that you need to get back to that item. For these case, Expiration Reminder allows you add or edit an item and set the expiration date and Not Set. […] Exporting to Excel Overview The export to Excel feature allows you export expiration items into an excel sheet. Exporting to Excel Follow these steps to export your items to Excel: Step 1: Click on Expirations on the top menu. Step 2: Click on Export to Excel. Filtering items on the Excel sheet The export to Excel feature works with […] Adding contacts to an item Overview If you need to send notifications to more than person for a specific item, you can use the contact feature. You can add as many contacts as needed to an expiration item. Adding contacts to an item Follow these steps to add contacts to an item: Step 1: Go to the contacts tab […] Attachments Overview Expiration Reminder allows to add attachments and files to items. There’s no limit on the number of files that can be attached. Attaching a file To attach a file, follow these steps: Step 1: On the expiration item screen, click on the Attachments tab. Step 2: Click on the Browse button and select […] Recovering a deleted expiration item Overview Items that have been previously deleted can be recovered in Expiration Reminder. When an item is deleted, it’s placed in a status where the item can no longer be seen in the item list, it doesn’t count for dashboard statistics and notifications no longer go out. Recovering a deleted item To recover a deleted […] Importing expiration items Overview Besides inputting items manually and using templates to speed up entering information, Expiration Reminder support importing items from an Excel spreadsheet, from a CSV file or from an XML file. You can also import items as simple as just importing the name and expiration date and you can go all the way and import […] Delete item Overview Deleting an item is done on the Expirations screen. Each item will have a link on right to delete it. Keep in mind that deleted items can be recovered in the future if needed. Delete an item To delete an item follow these steps: Step 1: Click on Expirations on the top menu. Step […] Item Views Overview Items views allows you save a search and filter criteria in Expiration Reminder so you don’t have to create the same filters over and over again. You can also see item views as predefined reports that you go back to at any time. Creating an item view To create an item view, follow […]
https://docs.expirationreminder.net/category/tracking-expiration-items/
2021-05-06T04:31:02
CC-MAIN-2021-21
1620243988725.79
[]
docs.expirationreminder.net
Returns a detailed overview of the entities pathId consists of, including their unique and occurrence IDs, positions, roles and literal value. The last two columns will only contain data if stemming is enabled for this domain through the $$$IKPSTEMMING domain parameter.
https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=%25iKnow.Queries.PathAPI
2021-05-06T04:05:17
CC-MAIN-2021-21
1620243988725.79
[]
docs.intersystems.com
Creating and using a configuration value using CloudBees Feature Management is simple. As soon as you define a configuration value in code and build and run your application, the newly defined configuration value will appear in the dashboard. 1. Creating a container class To create your first configuration value, you should do the following: Create a container class for your configuration. Define a configuration value inside the container class by picking the name, default value, and type. Here are example code snippets: 2. Registering the container class Once you have the container class defined, you need to register the instance to the CloudBees Feature Management SDK. This is done with the register SDK function. The register function accepts an instance of the Container class.
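The snippets referred to above are not reproduced here, so the following is only a rough JavaScript sketch of the two steps. It assumes the Rox-style API (Rox.Flag, Rox.RoxString, Rox.RoxNumber, Rox.register, Rox.setup) that the CloudBees Feature Management JavaScript SDK has exposed; check the SDK reference for your platform for the exact class and function names.

```javascript
const Rox = require('rox-browser'); // or 'rox-node', depending on your platform

// 1. A container object holding configuration values (names and defaults are examples).
const appConfig = {
  titleColor: new Rox.RoxString('#ffffff'),   // configuration value with a default
  maxItemsPerPage: new Rox.RoxNumber(25),
  enableBetaBanner: new Rox.Flag(false)       // a plain feature flag, for comparison
};

// 2. Register the container instance with the SDK, then set it up with your app key.
Rox.register('app', appConfig);
Rox.setup('<YOUR-APP-KEY>').then(() => {
  console.log('Title color:', appConfig.titleColor.getValue());
});
```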
https://docs.cloudbees.com/docs/cloudbees-feature-management/latest/feature-flags/configuration-values
2021-05-06T03:12:12
CC-MAIN-2021-21
1620243988725.79
[]
docs.cloudbees.com
Kirkbymoorside Town Council Agenda for the Ordinary Meeting of the Town Council 15 March 2021 Issued on 10 March 2021 for a meeting of the Town Council to be held remotely via Zoom at 7pm on Monday 15 March February 28 February 2021 - To note the cost of £38 for renewal of the garden waste subscription for the cemetery - To review the Internal Audit Terms of Reference - To review the Risk Assessment Strategy and consider the financial and other risks facing the council and the internal and external controls in place for management thereof, and agree any revisions - To review the Council's Asset Register - To agree the appointment of an internal auditor - Path for Everyone - To receive correspondence from Kirkbymoorside Environment Group, in partnership with Ryedale Cycle Forum - To consider the request for the allocation of a bank account for management of financial transactions associated with the project - To consider the request for match-funding in the amount of £20,000 from the Council's reserve funds - Planning - To review planning applications: - 21/00217/73 | Variation of condition 02 of planning approval 20/00784/FUL dated 17.11.2020 - to allow alterations to the design of the cottage ornee | Ravenswick Hall Young Bank Lane Kirkbymoorside YO62 7LT - 21/00128/FUL | Change of use of agricultural land to allow the siting of 3 no. camping pods with associated parking and access track together with erection of sofa barn to be used in connection with Deepdale Farm wedding venue (May to Sept) and Airbnb use (Oct - April) | Deep Dale Farm House Village Street Keldholme Kirkbymoorside North Yorkshire YO62 6LE - Planning Application 21/00049/FUL | Change of use and alterations to stables to form 1no. four bedroom dwelling with associated parking and landscaping | Land At OS Field 04201 Village Street Keldholme Kirkbymoorside North Yorkshire - To note that the application will be considered by the Ryedale District Council Planning Committee on 16 March 2021 at 6pm - To receive the Officer's report and note the recommendation for this application will be Refusal - To receive quotations for the cost of installing a new boardwalk at Ryedale View play area and consider appointment of the works - Town Farm car park - To receive information on the installation of Electric Vehicle Charge Points - To receive information on progress of discussions with Ryedale District Council - To consider submitting a proposal to Ryedale District Council to negotiate the future management of parking charges - Devolution - To receive the Ministry of Housing, Communities and Local Government consultation proposals for locally led reorganisation of Local Government in North Yorkshire - To receive correspondence from Councillor Carl Les, Leader of North Yorkshire County Council - To receive correspondence from Councillor Keane Duncan, Leader of Ryedale District Council - Urban grass cutting 2021/22 - To receive correspondence from NYCC Highway Asset Management with regards to the Urban Grass Cutting 2021/22 - To note the annual contribution of £982.62 from NYCC and consider continued cutting of the urban highways visibility splays - To note the planters on West End have been secured in situ and signage applied to denote ownership by the Town Council - To receive information from the Community Safety Officer for Ryedale District regarding the Police, Fire and Crime Commissioner's (PFCC) priorities to ensure people are safe in the community - To receive the Public Wi-fi Usage Report for Q4 - To note the proposal by Post 
Office to move the Kirkbymoorside branch to 1 High Market Place, Kirkbymoorside, York, YO62 6AT - To note that the North York Moors National Park are inviting comments on the new Management Plan, closing date Thursday 1 April - To note that Power for People will be hosting a webinar for councils and organisations to discuss the Community Energy Revolution Campaign which would see the Local Electricity Bill made law, at 7pm on 17 March 2021. - 19 April 2021 Related Documents: Notice issued by L Bolland, Town Clerk to Kirkbymoorside Town Council
https://docs.kirkbymoorsidetowncouncil.gov.uk/doku.php/agenda2021-03-15
2021-05-06T02:44:08
CC-MAIN-2021-21
1620243988725.79
[]
docs.kirkbymoorsidetowncouncil.gov.uk
jax.numpy.column_stack¶ jax.numpy. column_stack(tup)[source]¶ Stack 1-D arrays as columns into a 2-D array. LAX-backend implementation of column_stack(). Original docstring below. Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with hstack. 1-D arrays are turned into 2-D columns first. - Parameters tup (sequence of 1-D or 2-D arrays.) – Arrays to stack. All of them must have the same first dimension. - Returns stacked – The array formed by stacking the given arrays. - Return type 2-D array
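A short, self-contained example of the behavior described above (values chosen arbitrarily):

```python
import jax.numpy as jnp

a = jnp.array([1, 2, 3])
b = jnp.array([4, 5, 6])

# Two 1-D arrays become the columns of a single 2-D array.
print(jnp.column_stack((a, b)))
# [[1 4]
#  [2 5]
#  [3 6]]
```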
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.column_stack.html
2021-05-06T04:06:24
CC-MAIN-2021-21
1620243988725.79
[]
jax.readthedocs.io
If an error occurs when StorageGRID attempts to communicate with an endpoint, a message is displayed on the Dashboard and the error message is shown in the Last Error column on the Endpoints page. No error is displayed if the permissions associated with an endpoint's credentials are incorrect. If any endpoint errors have occurred within the past 7 days, the Dashboard in the Tenant Manager displays an alert message. You can go to the Endpoints page to see more details about the error. When you go to the Endpoints page, you can review the more detailed error message in the Last Error column. This column displays only the most recent error message for each endpoint, and it indicates how long ago the error occurred. Errors in red occurred within the past 7 days. Some errors might continue to be shown in the Last Error column even after they are resolved. To see if an error is current or to force the removal of a resolved error from the table, select the radio button for the endpoint, and click Test. Clicking Test causes StorageGRID to validate that the endpoint exists and that it can be reached with the current credentials. The connection to the endpoint is validated from one node at each site. If an endpoint error occurs, you can use the message in the Last Error column to help determine the cause. After you edit the endpoint to correct the issue, clicking Save causes StorageGRID to validate the updated endpoint and confirm that it can be reached with the current credentials. The connection to the endpoint is validated from one node at each site. When StorageGRID validates an endpoint, it confirms that the endpoint's credentials can be used to contact the destination resource but does not confirm those credentials' permissions. No error is displayed if the permissions associated with an endpoint's credentials are incorrect. If you receive an error when attempting to use a platform service (such as "403 Forbidden"), check the permissions associated with the endpoint's credentials.
https://docs.netapp.com/sgws-113/topic/com.netapp.doc.sg-tenant-admin/GUID-46F5CBFC-B106-4065-B366-3C4170898AE3.html
2021-05-06T04:26:42
CC-MAIN-2021-21
1620243988725.79
[]
docs.netapp.com
jax.numpy.polymul¶ jax.numpy.polymul(a1, a2, *, trim_leading_zeros=False)[source]¶ Find the product of two polynomials. LAX-backend implementation of polymul(). Setting trim_leading_zeros=True makes the output match that of numpy, but it prevents the function from being used in compiled code. Original docstring below. Finds the polynomial resulting from the multiplication of the two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. - Parameters a1 (array_like or poly1d object) – Input polynomials. a2 (array_like or poly1d object) – Input polynomials. - Returns out – The polynomial resulting from the multiplication of the inputs. If either input is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. - Return type ndarray or poly1d object
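For example (coefficients are listed from highest to lowest degree, as the docstring says):

```python
import jax.numpy as jnp

p1 = jnp.array([1.0, 2.0, 3.0])   # x^2 + 2x + 3
p2 = jnp.array([9.0, 5.0, 1.0])   # 9x^2 + 5x + 1

print(jnp.polymul(p1, p2))
# [ 9. 23. 38. 17.  3.]  ->  9x^4 + 23x^3 + 38x^2 + 17x + 3
```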
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.polymul.html
2021-05-06T03:55:05
CC-MAIN-2021-21
1620243988725.79
[]
jax.readthedocs.io
OETokenizerBase¶ class OETokenizerBase This is an abstract base class used to represent an object that can efficiently split up a molecular file into a stream of tokens, each token representing one molecule in the stream. Implementations of this class are returned by the OEGetTokenizer function. This class is expected to do the minimal amount of string parsing possible to determine the next chunk of bytes representing a molecule. GetNextToken¶ std::string *GetNextToken(OEPlatform::oeistream &ifs)=0 Returns a chunk of bytes representing the next molecule in the stream ifs. The stream of bytes is expected to be in the same file format as was passed to the OEGetTokenizer factory function used to construct this object. This method returns a new std::string, ownership is passed to the caller, that is expected to call delete. ParseTitle¶ std::string ParseTitle(const std::string &data) const=0 Return the string that OEMolBase::GetTitle would return for the molecule record data passed in. The implementations of this method are designed to do the minimal amount of parsing required to retrieve the title.
https://docs.eyesopen.com/toolkits/cpp/oechemtk/OEChemClasses/OETokenizerBase.html
2021-05-06T03:03:48
CC-MAIN-2021-21
1620243988725.79
[]
docs.eyesopen.com
The QFileSystemModel class provides a data model for the local filesystem. More... This class was introduced in Qt 4.4. This class provides access to the local filesystem, providing functions for renaming and removing files and directories, and for creating new directories. In the simplest case, it can be used with a suitable display widget as part of a browser or filter. QFileSystemModel can be accessed using the standard interface provided by QAbstractItemModel, but it also provides some convenience functions that are specific to a directory model. The fileInfo(), isDir(), fileName() and filePath() functions provide information about the underlying files and directories related to items in the model. Directories can be created and removed using mkdir(), rmdir(). Note: QFileSystemModel requires an instance of QApplication.FileSystemModel, QFileSystemModel uses a separate thread to populate itself so it will not cause the main thread to hang as the file system is being queried. Calls to rowCount() will return 0 until the model populates a directory. QFileSystemModel keeps a cache with file information. The cache is automatically kept up to date using the QFileSystemWatcher. See also Model Classes. This enum was introduced or modified in Qt 5.14. The Options type is a typedef for QFlags<Option>. It stores an OR combination of Option values. See also resolveSymlinks. This property holds whether files that don't pass the name filter are hidden or disabled This property is true by default Access functions: This property holds the various options that affect the model By default, all options are disabled. Options should be set before changing properties. This property was introduced in Qt 5.14. Access functions: See also setOption() and testOption(). Windows. By default, this property is true. Access functions: See also QFileSystemModel::Options. Constructs a file system model with the given parent. [signal]void QFileSystemModel::directoryLoaded(const QString &path) This signal is emitted when the gatherer thread has finished to load the path. This function was introduced in Qt 4.7. [signal]void QFileSystemModel::fileRenamed(const QString &path, const QString &oldName, const QString &newName) This signal is emitted whenever a file with the oldName is successfully renamed to newName. The file is located in in the directory path. [signal]void QFileSystemModel::rootPathChanged(const QString &newPath) This signal is emitted whenever the root path has been changed to a newPath. [virtual]QFileSystemModel::~QFileSystemModel() Destroys this file system model. [override virtual]bool QFileSystemModel::canFetchMore(const QModelIndex &parent) const Reimplements: QAbstractItemModel::canFetchMore(const QModelIndex &parent) const. [override virtual]int QFileSystemModel::columnCount(const QModelIndex &parent = QModelIndex()) const Reimplements: QAbstractItemModel::columnCount(const QModelIndex &parent) const. [override virtual]QVariant QFileSystemModel::data(const QModelIndex &index, int role = Qt::DisplayRole) const Reimplements: QAbstractItemModel::data(const QModelIndex &index, int role) const. [override virtual]bool QFileSystem). Handles the data supplied by a drag and drop operation that ended with the given action over the row in the model specified by the row and column and by the parent index. Returns true if the operation was successful. See also supportedDropActions(). [override virtual protected]bool QFileSystemModel::event(QEvent *event) Reimplements: QObject::event(QEvent *e). 
[override virtual]void QFileSystemModel::fetchMore(const QModelIndex &parent) Reimplements: QAbstractItemModel::fetchMore(const QModelIndex &parent). Returns the icon for the item stored in the model under the given index. Returns the QFileInfo for the item stored in the model under the given index. Returns the file name for the item stored in the model under the given index. Returns the path of the item stored in the model under the index given. Returns the filter specified for the directory model. If a filter has not been set, the default filter is QDir::AllEntries | QDir::NoDotAndDotDot | QDir::AllDirs. See also setFilter() and QDir::Filters. [override virtual]Qt::ItemFlags QFileSystemModel::flags(const QModelIndex &index) const Reimplements: QAbstractItemModel::flags(const QModelIndex &index) const. [override virtual]bool QFileSystemModel::hasChildren(const QModelIndex &parent = QModelIndex()) const Reimplements: QAbstractItemModel::hasChildren(const QModelIndex &parent) const. [override virtual]QVariant QFileSystemModel::headerData(int section, Qt::Orientation orientation, int role = Qt::DisplayRole) const Reimplements: QAbstractItemModel::headerData(int section, Qt::Orientation orientation, int role) const. Returns the file icon provider for this directory model. See also setIconProvider(). [override virtual]QModelIndex QFileSystemModel::index(int row, int column, const QModelIndex &parent = QModelIndex()) const Reimplements: QAbstractItemModel::index(int row, int column, const QModelIndex &parent) const. This is an overloaded function. Returns the model item index for the given path and column. Returns true if the model item index represents a directory; otherwise returns false. Returns the date and time when index was last modified. [override virtual]QMimeData *QFileSystemModel::mimeData(const QModelIndexList &indexes) const Reimplements: QAbstractItemModel::mimeData(const QModelIndexList &indexes) const. Returns an object that contains a serialized description of the specified indexes. The format used to describe the items corresponding to the indexes is obtained from the mimeTypes() function. If the list of indexes is empty, nullptr is returned rather than a serialized empty list. [override virtual]QStringList QFileSystemModel::mimeTypes() const Reimplements: QAbstractItemModel::mimeTypes() const. Returns a list of MIME types that can be used to describe a list of items in the model. Create a directory with the name in the parent model index. Returns the data stored under the given role for the item "My Computer". See also Qt::ItemDataRole. Returns a list of filters applied to the names in the model. See also setNameFilters(). [override virtual]QModelIndex QFileSystemModel::parent(const QModelIndex &index) const Reimplements: QAbstractItemModel::parent(const QModelIndex &index) const. Returns the complete OR-ed together combination of QFile::Permission for the index. Removes the model item index from the file system file system model and deletes the corresponding directory from the file system, returning true if successful. If the directory cannot be removed, false is returned. Warning: This function deletes directories from the file system; it does not move them to a location where they can be recovered. The currently set directory The currently set root path See also setRootPath() and rootDirectory(). [override virtual]int QFileSystemModel::rowCount(const QModelIndex &parent = QModelIndex()) const Reimplements: QAbstractItemModel::rowCount(const QModelIndex &parent) const. 
[override virtual]bool QFileSystemModel::setData(const QModelIndex &idx, const QVariant &value, int role = Qt::EditRole) Reimplements: QAbstractItemModel::setData(const QModelIndex &index, const QVariant &value, int role). Sets the directory model's filter to that specified by filters. Note that the filter you set should always include the QDir::AllDirs enum value, otherwise QFileSystemModel won't be able to read the directory structure. See also filter() and QDir::Filters. Sets the provider of file icons for the directory model. See also iconProvider(). Sets the name filters to apply against the existing files. See also nameFilters(). Sets the given option to be enabled if on is true; otherwise, clears the given option. Options should be set before changing properties. This function was introduced in Qt 5.14. See also options and testOption(). Sets the directory that is being watched by the model to newPath by installing a file system watcher on it. Any changes to files and directories within this directory will be reflected in the model. If the path is changed, the rootPathChanged() signal will be emitted. Note: This function does not change the structure of the model or modify the data available to views. In other words, the "root" of the model is not changed to include only files and directories within the directory specified by newPath in the file system. [override virtual]QModelIndex QFileSystemModel::sibling(int row, int column, const QModelIndex &idx) const Reimplements: QAbstractItemModel::sibling(int row, int column, const QModelIndex &index) const. Returns the size in bytes of index. If the file does not exist, 0 is returned. [override virtual]void QFileSystemModel::sort(int column, Qt::SortOrder order = Qt::AscendingOrder) Reimplements: QAbstractItemModel::sort(int column, Qt::SortOrder order). [override virtual]Qt::DropActions QFileSystemModel::supportedDropActions() const Reimplements: QAbstractItemModel::supportedDropActions() const. Returns true if the given option is enabled; otherwise, returns false. This function was introduced in Qt 5.14. See also options and setOption(). [override virtual protected]void QFileSystemModel::timerEvent(QTimerEvent *event) Reimplements: QObject::timerEvent(QTimerEvent *event). Returns the type of file index such as "Directory" or "JPEG file". © The Qt Company Ltd Licensed under the GNU Free Documentation License, Version 1.3.
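To tie the reference above together, a minimal, conventional use of the class is to point it at a directory and attach it to a view (the home directory here is just an example):

```cpp
#include <QApplication>
#include <QDir>
#include <QFileSystemModel>
#include <QTreeView>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);           // QFileSystemModel requires a QApplication

    QFileSystemModel model;
    model.setRootPath(QDir::homePath());    // start watching this directory

    QTreeView tree;
    tree.setModel(&model);
    // Show only the chosen directory, not the whole filesystem root.
    tree.setRootIndex(model.index(QDir::homePath()));
    tree.show();

    return app.exec();
}
```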
https://docs.w3cub.com/qt~5.15/qfilesystemmodel
2021-05-06T04:49:04
CC-MAIN-2021-21
1620243988725.79
[]
docs.w3cub.com
My File Doesn't Open After Migration If the migration completes successfully with no errors, but the file is not open at the end, then it could be that your "file is Insecure". FileMaker Server can be configured to not allow files that do not have good passwords to be hosted. If your server is set up that way, then your file must have a user name and good password, or it will not stay open. Otto will open the file and signal that it was done, but FileMaker Server will immediately close it again. You can see if this is the problem by looking in the FileMaker Server event logs. If this is the problem, the log will show a message saying the "File Is Insecure" and can't be opened. Here is how FileMaker defines an "Insecure File": A secure file ....
https://docs.ottofms.info/article/637-my-file-doesnt-open-after-migration
2021-05-06T04:12:34
CC-MAIN-2021-21
1620243988725.79
[]
docs.ottofms.info
Definition at line 150 of file RtmProfileList.py. """ This example demonstrates the ListCtrl's Virtual List features. A Virtual list can contain any number of cells, but data is not loaded into the control itself. It is loaded on demand via the virtual methods OnGetItemText(), OnGetItemImage(), and OnGetItemAttr(). This greatly reduces the amount of memory required without limiting what can be done with the list control itself. """ Definition at line 157 of file RtmProfileList.py.
https://docs.ros.org/en/indigo/api/openrtm_aist/html/namespaceRtmProfileList.html
2021-05-06T04:54:40
CC-MAIN-2021-21
1620243988725.79
[]
docs.ros.org
Difference between revisions of "Turist" Policies Violating the following terms can permanently terminate your use of the VM. Aside from the normal morals of using someone else's machine: - Don't distribute your private key. Only you are allowed to log into the server. - Don't fill up the disk, and if you do then it's your job to pester the other users to clean their shit - Don't fool with network resources (ever), incl. attacking internal COSI services - Don't perform DoS attacks - Don't download large files at ridiculous speeds (barring we get a software limit on the speed of the network interface of the VM) Administrative Stuff Adding a User Add the user: adduser <username> Come up with some password and then forget it (it's not necessary to remember; the user can set it, but it does nothing since you can't ssh with a password (only public key) and there's no sudo for users). Add their SSH public key Log in as root, and make a file at /etc/ssh/authorized_keys/<username>. Edit that file to contain their public key (contact the alumnus), and then profit. See the example below.
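A minimal sketch of those two steps (the user name and key are placeholders; this assumes sshd on the VM is configured to read keys from /etc/ssh/authorized_keys/<username>, as the path above implies):

```sh
# Create the account (set a throwaway password when prompted).
adduser alice

# Install their public key where this host's sshd expects it.
mkdir -p /etc/ssh/authorized_keys
echo "ssh-ed25519 AAAAC3...example alice@laptop" > /etc/ssh/authorized_keys/alice
chmod 644 /etc/ssh/authorized_keys/alice
```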
http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Turist&diff=prev&oldid=8561&printable=yes
2020-05-25T09:01:29
CC-MAIN-2020-24
1590347388012.14
[]
docs.cslabs.clarkson.edu
Z Supernatural for Genesis 8 Male and Landon 8 is a brand new and unique pose collection. You can use them with various different scenes and scenarios as they are extremely versatile. All poses have been carefully adjusted for both Genesis 8 Male and Landon.
http://docs.daz3d.com/doku.php/public/read_me/index/58675/start
2020-05-25T09:05:47
CC-MAIN-2020-24
1590347388012.14
[]
docs.daz3d.com
Numbers Two types of numbers can be highlighted in modern JavaScript: Regular and BigInt. - Regular ones are stored in the 64-bit IEEE-754 format. It is also known as "double-precision floating-point numbers". Developers use these numbers most in their practice. - BigInt numbers are used for representing integers of arbitrary length. We only use them in a few unique areas. The Ways of Writing a Number¶ Let’s say you want to write one billion. This is the most common way: let billion = 1000000000; In our practice, we commonly try to keep away from writing many zeroes. We prefer to write, for example, "1bn". Now, we will learn how to shorten a number in JavaScript. To shorten a number in JavaScript, you need to append the letter "e" to the number and specify the zeroes count. It will look like this: let million = 1e6; // 1 million, literally: 1 and 6 zeroes console.log(2.5e6); // 2.5 millions (2,500,000) var exponential = 2.56e3; console.log(exponential); // 2560 Now, imagine you want to write “one microsecond”. You need to do it as follows: let ms = 0.000001; If you are eager to avoid writing a large number of zeros, you should act like this: let ms = 1e-6; // six zeroes to the left from 1 console.log(ms); As 0.000001 includes 6 zeroes, it will be 1e-6. Hexadecimal, Binary and Octal Numbers¶ In JavaScript, we use hexadecimal numbers, also known as Hex, for representing colors, encoding characters, and a lot more. Fortunately, there is a shorter way to write them: 0x and then the number. Here is an example: let hex = 0xfff; console.log(hex); // 4095 console.log(0xff); // 255 console.log(0xFF); // 255 (the same, case doesn't matter) Here is an example of the octal numbers: var octal = 030; console.log(octal); // 24 Generally, developers use the binary and octal numeral systems less often. These numeral systems are also supported using the following prefixes: 0b and 0o. For instance: let a = 0b11111111; // binary form of 255 let b = 0o377; // octal form of 255 console.log(a == b); // true toString (Base)¶ This method returns a string representation of num, with a particular base, which can vary from 2 to 36. The default is 10. For example: let num = 255; console.log(num.toString(16)); // ff console.log(num.toString(2)); // 11111111 Rounding¶ A typical operation while working with numbers is rounding. Below you can find several built-in functions used for rounding: Math.floor With the help of this function, you can easily round down. For example, 6.2 becomes 6, -2.2 becomes -3. Math.ceil This function does the opposite. It rounds up the numbers. For example, 3.1 will become 4, -2.2 will become -2. Math.round Using this function will round to the nearest integer. For instance, 3.1 becomes 3, 5.6 becomes 6, and -2.2 becomes -2. Math.trunc This function removes anything after the decimal point. For example, 6.1 will become 6. Imprecise Calculation¶ In the 64-bit IEEE-754 format, there are 64 bits for storing a number: 52 of them are used for storing the digits, 11 for the position of the decimal point, and 1 for the sign. In case the number is too large, it will overflow the 64-bit storage and will give Infinity. For better understanding, look at this example: console.log(1e500); // Infinity The loss of precision is also a common thing. Check out the following (falsy!) test: console.log(0.1 + 0.2 == 0.3); // false Tests: isFinite and isNaN¶ First of all, let’s check out the following two unique numeric values: - Infinity (and -Infinity). 
This numeric value is greater (or, for -Infinity, less) than anything else. - NaN represents an error. It is vital to know that these are not standard numbers. Therefore, you need to use special functions to check for them. - isNaN(value) converts its argument into a number and then tests whether it is NaN. - isFinite(value) will convert its argument to a number and return true in case the number is regular, not NaN/Infinity/-Infinity. For instance: console.log(isNaN(NaN)); // true console.log(isNaN("str")); // true You may wonder why it is not possible to use the comparison === NaN. The reason is that NaN is unique in that it does not equal anything, even itself: console.log(NaN === NaN); // false Let’s have a look at this example: console.log(isFinite("23")); // true console.log(isFinite("str")); // false, because a special value: NaN console.log(isFinite(Infinity)); // false, because a special value: Infinity There are cases when developers use isFinite for validating whether a string value is a regular number, like this: let num = +prompt("Enter a number", ''); console.log(isFinite(num)); // if you don't enter Infinity, -Infinity or NaN, the result will be true ParseInt and ParseFloat¶ A numeric conversion that uses a + or Number() is strict. If a value is not exactly a number, it will fail as follows: console.log(+"200px"); // NaN The spaces at the start or the end of the string are the only exception, as they are ignored. By the way, in CSS, you can meet values in units, like "20pt" or "50px". Moreover, there are many countries where the symbol of the currency goes after the amount. For example, "19$". If you want to extract a numeric value out of it, you can use parseInt and parseFloat. These functions read a number from a string until they can't; the number gathered up to that point is returned. parseInt will return an integer, while parseFloat returns a floating-point number, as follows: console.log(parseInt('50px')); // 50 console.log(parseFloat('22.5em')); // 22.5 console.log(parseInt('20.4$')); // 20, only the integer part is returned console.log(parseInt('22.2')); // 22, only the integer part is returned console.log(parseFloat('22.2.4')); // 22.2, the second point stops the reading In some situations, parseInt and parseFloat will return NaN. It happens when no digits could be read. Check out the following example: console.log(parseInt('a13')); // NaN, the first symbol stops the process The parseInt() function has an optional second parameter which specifies the base of the numeral system. So, parseInt can parse strings of hex numbers, binary numbers, and so on: console.log(parseInt('0xff', 16)); // 255 console.log(parseInt('ff', 16)); // 255, without 0x also works console.log(parseInt('2n9', 36)); // 3429
https://www.w3docs.com/learn-javascript/numbers.html
2020-03-28T20:53:43
CC-MAIN-2020-16
1585370493120.15
[]
www.w3docs.com
If you are looking to save some time by copying shifts, you can achieve this one of two ways. The first is by editing a current shift or creating a new one. You will see a dropdown that says "Copy shift to" where you'll be able to select which days you'd like the shift to be copied to. After you've chosen your shifts select save. You will need to Publish the shifts once done. Alternatively, you can choose to copy the shift schedule from the previous week by selecting Options followed by Copy Previous Week. This will not copy Time Off. You will then need to Publish the shift(s) once copied.
https://docs.buddypunch.com/en/articles/2664089-how-do-i-copy-shifts
2020-03-28T21:38:06
CC-MAIN-2020-16
1585370493120.15
[array(['https://downloads.intercomcdn.com/i/o/98882729/5ea0f0b3a772674c5a885c2e/copyshift.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/98882959/3d2cb180f52348bbaee9c031/copy_week.png', None], dtype=object) ]
docs.buddypunch.com
WireGuard¶ WireGuard is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. See for more information. Configuration¶ WireGuard requires the generation of a keypair, a private key which will decrypt incoming traffic and a public key, which the peer(s) will use to encrypt traffic. Generate keypair¶ It generates the keypair, that is its public and private part and stores it within VyOS. It will be used per default on any configured WireGuard interface, even if multiple interfaces are being configured. It shows the public key which needs to be shared with your peer(s). Your peer will encrypt all traffic to your system using this public key. Generate named keypair¶ Named keypairs can be used on a interface basis, if configured. If multiple WireGuard interfaces are being configured, each can have their own keypairs. The commands below will generate 2 keypairs, which are not related to each other. [email protected]:~$ generate wireguard named-keypairs KP01 [email protected]:~$ generate wireguard named-keypairs KP02 Interface configuration¶ The next step is to configure your local side as well as the policy based trusted destination addresses. If you only initiate a connection, the listen port and endpoint is optional, if you however act as a server and endpoints initiate the connections to your system, you need to define a port your clients can connect to, otherwise it’s randomly chosen and may make it difficult with firewall rules, since the port may be a different one when you reboot your system. You will also need the public key of your peer as well as the network(s) you want to tunnel (allowed-ips) to configure a WireGuard tunnel. The public key below is always the public key from your peer, not your local one. local side set interfaces wireguard wg01 address '10.1.0.1/24' set interfaces wireguard wg01 description 'VPN-to-wg02' set interfaces wireguard wg01 peer to-wg02 allowed-ips '10.2.0.0/24' set interfaces wireguard wg01 peer to-wg02 endpoint '192.168.0.142:12345' set interfaces wireguard wg01 peer to-wg02 pubkey 'XMrlPykaxhdAAiSjhtPlvi30NVkvLQliQuKP7AI7CyI=' set interfaces wireguard wg01 port '12345' set protocols static interface-route 10.2.0.0/24 next-hop-interface wg01 Note The endpoint must be an IP and not a fully qualified domain name (FQDN). Using a FQDN will result in unexpected behavior. The last step is to define an interface route for 10.2.0.0/24 to get through the WireGuard interface wg01. Multiple IPs or networks can be defined and routed, the last check is allowed-ips which either prevents or allows the traffic. To use a named key on an interface, the option private-key needs to be set. set interfaces wireguard wg01 private-key KP01 set interfaces wireguard wg02 private-key KP02 The command run show wireguard keypairs pubkey KP01 will then show the public key, which needs to be shared with the peer. remote side set interfaces wireguard wg01 address '10.2.0.1/24' set interfaces wireguard wg01 description 'VPN-to-wg01' set interfaces wireguard wg01 peer to-wg02 allowed-ips '10.1.0.0/24' set interfaces wireguard wg01 peer to-wg02 endpoint '192.168.0.124:12345' set interfaces wireguard wg01 peer to-wg02 pubkey 'u41jO3OF73Gq1WARMMFG7tOfk7+r8o8AzPxJ1FZRhzk=' set interfaces wireguard wg01 port '12345' set protocols static interface-route 10.1.0.0/24 next-hop-interface wg01 Assure that your firewall rules allow the traffic, in which case you have a working VPN using WireGuard wg01# ping 10.2.0.1 PING 10.2.0.1 (10.2.0.1) 56(84) bytes of data. 
64 bytes from 10.2.0.1: icmp_seq=1 ttl=64 time=1.16 ms 64 bytes from 10.2.0.1: icmp_seq=2 ttl=64 time=1.77 ms wg02# ping 10.1.0.1 PING 10.1.0.1 (10.1.0.1) 56(84) bytes of data. 64 bytes from 10.1.0.1: icmp_seq=1 ttl=64 time=4.40 ms 64 bytes from 10.1.0.1: icmp_seq=2 ttl=64 time=1.02 ms An additional layer of symmetric-key crypto can be used on top of the asymmetric crypto, which is optional. wg01# run generate wireguard preshared-key rvVDOoc2IYEnV+k5p7TNAmHBMEGTHbPU8Qqg8c/sUqc= Copy the key, as it is not stored on the local file system. Make sure you distribute that key in a safe manner, it’s a symmetric key, so only you and your peer should have knowledge of its content. wg01# set interfaces wireguard wg01 peer to-wg02 preshared-key 'rvVDOoc2IYEnV+k5p7TNAmHBMEGTHbPU8Qqg8c/sUqc=' wg02# set interfaces wireguard wg01 peer to-wg01 preshared-key 'rvVDOoc2IYEnV+k5p7TNAmHBMEGTHbPU8Qqg8c/sUqc=' Road Warrior Example¶ With WireGuard, a Road Warrior VPN config is similar to a site-to-site VPN. It just lacks the endpoint address. In the following example, the IPs for the remote clients are defined in the peers. This would allow the peers to interact with one another. wireguard wg0 { address 10.172.24.1/24 address 2001:DB8:470:22::1/64 description RoadWarrior peer MacBook { allowed-ips 10.172.24.30/32 allowed-ips 2001:DB8:470:22::30/128 persistent-keepalive 15 pubkey F5MbW7ye7DsoxdOaixjdrudshjjxN5UdNV+pGFHqehc= } peer iPhone { allowed-ips 10.172.24.20/32 allowed-ips 2001:DB8:470:22::30/128 persistent-keepalive 15 pubkey BknHcLFo8nOo8Dwq2CjaC/TedchKQ0ebxC7GYn7Al00= } port 2224 } The following is the config for the iPhone peer above. It’s important to note that the AllowedIPs setting directs all IPv4 and IPv6 traffic through the connection. [Interface] PrivateKey = ARAKLSDJsadlkfjasdfiowqeruriowqeuasdf= Address = 10.172.24.20/24, 2001:DB8:470:22::20/64 DNS = 10.0.0.53, 10.0.0.54 [Peer] PublicKey = RIbtUTCfgzNjnLNPQ/ulkGnnB2vMWHm7l2H/xUfbyjc= AllowedIPs = 0.0.0.0/0, ::/0 Endpoint = 192.0.2.1:2224 PersistentKeepalive = 25 This MacBook peer is doing split-tunneling, where only the subnets local to the server go over the connection. [Interface] PrivateKey = 8Iasdfweirousd1EVGUk5XsT+wYFZ9mhPnQhmjzaJE6Go= Address = 10.172.24.30/24, 2001:DB8:470:22::30/64 [Peer] PublicKey = RIbtUTCfgzNjnLNPQ/ulkGnnB2vMWHm7l2H/xUfbyjc= AllowedIPs = 10.172.24.30/24, 2001:DB8:470:22::/64 Endpoint = 192.0.2.1:2224 PersistentKeepalive = 25 Operational commands¶ Show interface status [email protected]# run show interfaces wireguard wg01 interface: wg1 description: VPN-to-wg01 address: 10.2.0.1/24 public key: RIbtUTCfgzNjnLNPQ/asldkfjhaERDFl2H/xUfbyjc= private key: (hidden) listening port: 53665 peer: to-wg02 public key: u41jO3OF73Gq1WARMMFG7tOfk7+r8o8AzPxJ1FZRhzk= latest handshake: 0:01:20 status: active endpoint: 192.168.0.124:12345 allowed ips: 10.2.0.0/24 transfer: 42 GB received, 487 MB sent persistent keepalive: every 15 seconds RX: bytes packets errors dropped overrun mcast 45252407916 31192260 0 244493 0 0 TX: bytes packets errors dropped carrier collisions 511649780 5129601 24465 0 0 0 Show public key of the default key Show public key of a named key Delete wireguard keypairs
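The three operational-command headings above lost their command bodies during extraction. Judging from the `run show wireguard keypairs pubkey KP01` form shown earlier, the first two most likely look as follows (a best-effort reconstruction with a generic prompt; verify against your VyOS version, and consult the VyOS documentation for the exact keypair delete syntax):

```
vyos@vyos:~$ show wireguard keypairs pubkey default
vyos@vyos:~$ show wireguard keypairs pubkey KP01
```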
https://docs.vyos.io/en/latest/vpn/wireguard.html
2020-03-28T21:10:24
CC-MAIN-2020-16
1585370493120.15
[]
docs.vyos.io
pg_statistic The pg_statistic system catalog table stores statistical data about the contents of the database. Entries are created by ANALYZE and subsequently used by the query optimizer. There is one entry for each table column that has been analyzed. Note that all the statistical data is inherently approximate, even assuming that it is up to date. When stainherit = false, there is normally one entry for each table column that has been analyzed. If the table has inheritance children, Greenplum Database creates a second entry with stainherit = true. This row represents the column's statistics over the whole inheritance tree, that is, statistics for the data you would see with SELECT column FROM table*, whereas the stainherit = false row represents the results of SELECT column FROM ONLY table. Statistical information about a table's contents should be considered sensitive (for example: minimum and maximum values of a salary column). pg_stats is a publicly readable view on pg_statistic that only exposes information about those tables that are readable by the current user.
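For day-to-day inspection it is usually easier to query the pg_stats view than pg_statistic itself. The table and column names below are placeholders, not objects defined on this page:

```sql
-- Refresh the statistics, then look at what the optimizer will use.
ANALYZE sales;

SELECT tablename,
       attname,
       null_frac,        -- fraction of NULL entries
       n_distinct,       -- estimated number of distinct values
       most_common_vals  -- most common values, if collected
FROM pg_stats
WHERE tablename = 'sales'
  AND attname = 'amount';
```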
https://gpdb.docs.pivotal.io/6-0/ref_guide/system_catalogs/pg_statistic.html
2020-03-28T22:01:59
CC-MAIN-2020-16
1585370493120.15
[]
gpdb.docs.pivotal.io
ONTAP cluster and SnapMirror requirements Before connecting the NetApp Data Availability Services (NDAS) app to ONTAP storage, you must prepare the ONTAP target cluster and the ONTAP source clusters that contain the volumes you are backing up. You must have ONTAP administrator privileges to configure these prerequisites. Note: The AWS cloud and StorageGRID solutions have the same ONTAP and SnapMirror requirements, except that a CA certificate is required for the StorageGRID solution. ONTAP requirements for NetApp Data Availability Services: The ONTAP clusters containing the source and target SnapMirror replicated volumes must be in a cluster peer relationship. ONTAP 9 Cluster and SVM Peering Express Guide SnapMirror must be licensed on the source and target ONTAP clusters. It is not necessary to configure SnapMirror relationships before deploying NetApp Data Availability Services. NetApp Data Availability Services can configure a source-target-cloud relationship. However, NetApp Data Availability Services automatically discovers existing SnapMirror relationships. ONTAP 9 Data Protection Power Guide: Managing SnapMirror volume replication Note: If existing SnapMirror relationships are being protected, only SnapMirror relationships of type MirrorAndVault and MirrorAll can be extended to cloud. SnapMirror relationships of type MirrorLatest cannot be extended to cloud. One of the following ONTAP 9 releases must be running on the secondary (target) cluster: ONTAP 9.7 or a later ONTAP 9.7x version (beginning with release 1.1.2) ONTAP 9.6P1 or a later ONTAP 9.6x version ONTAP 9.5P6 or a later ONTAP 9.5x version ONTAP version 9.3 or later must be running on all peered primary (source) clusters. For each node in the target ONTAP cluster, the intercluster LIFs in the default IPspace must have external internet access. The ONTAP clusters must have their system time synchronized with UTC (Coordinated Universal Time). To configure or verify Network Time Protocol (NTP) on an ONTAP cluster, use the cluster time-service ntp command. ONTAP 9 System Administration Reference: Managing the cluster time For StorageGRID implementations, you must install a StorageGRID CA certificate on the secondary (target) cluster. Reverting or downgrading ONTAP to a version not supported by NetApp Data Availability Services requires stopping NDAS protection on all protected volumes, running an ONTAP cleanup script, and removing NDAS registration from System Manager. If you need to revert or downgrade ONTAP after configuring NetApp Data Availability Services, contact your Support representative. Concurrent transfer limits for Copy to Cloud relationships Copy to Cloud relationships include two kinds of Snapshot transfers: Snapshot copy contents and metadata, which is used for cataloging of files in a backup. In ONTAP 9.5 and later, the maximum number of concurrent SnapMirror transfers per node is limited to 100. Out of these 100, a maximum of 32 can be Copy to Cloud data transfers in release 1.0. On nodes with DP_Optimized (DPO) licenses, the ONTAP concurrent transfer limit increases to 200; however, Copy to Cloud data transfers are still limited to 32 in this release. In addition to data transfers, a node running ONTAP 9.5 (DPO or non-DPO) can have up to 32 Copy to Cloud Metadata transfers running in parallel.
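As a rough illustration of the NTP requirement above, checking and configuring time synchronization from the ONTAP CLI typically looks like this (the cluster name and NTP server are placeholders; verify the exact commands against your ONTAP release):

```
cluster1::> cluster time-service ntp server show
cluster1::> cluster time-service ntp server create -server time.example.com
cluster1::> cluster date show
```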
https://docs.netapp.com/us-en/netapp-data-availability-services/concept_cluster_and_snapmirror_requirements.html
2020-03-28T21:44:54
CC-MAIN-2020-16
1585370493120.15
[]
docs.netapp.com
examples package: Using the Alias property Gloop gives developers the ability to rename Gloop model properties during serialization and deserialization using the Alias property. With this property, you can: - name a Gloop model property after a Java/Groovy keyword (e.g. int, boolean, while); - have it contain special characters that are invalid in variable names in Java (e.g. !%toro*6, +hello); or - change it to something else so that it matches the requirements of your API. The examples package demonstrates how the Alias property works via the aliasForSerializing.PrintPerson.gloop service. When this service is executed, it will transform the input to XML, JSON, and YAML, and then log the result to the console. The resulting XML, JSON, and YAML strings will show the properties that have been renamed with their aliases. Related articles Please see the following articles for more information: Try it! In the Navigator, expand the examples package and navigate to the code folder, then expand the aliasForSerializing package. This package contains two files, as shown below: Running the PrintPerson.gloop service will provide an output similar to that shown below. Output of PrintPerson.gloop But what exactly are you looking at and what's so special about this service? Explanation This example shows how you can serialize a JSON, XML, or YAML attribute/element/property to any name you like. A Gloop model, when provided with an invalid property name, will attempt to make it a valid Gloop and Groovy name. The desired name will be configured as the property's Alias instead; and during serialization or deserialization to JSON, XML or YAML, Gloop will use the Alias property. The Person.model Gloop model contains invalid property names, each assigned an alias that is used when the model gets serialized. After serialization, the value of the Alias property is used to write the output instead of the Gloop-assigned property names. When a property has an Alias, it will appear in curved brackets after the property itself in Gloop, as shown below: Generate models from an existing source You can generate Gloop models from an existing Gloop model, a JSON, XML or YAML string or file, or a JSON or XML schema.
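For illustration only (these property names are hypothetical, not the actual contents of Person.model): if a model property whose Gloop-friendly name is hello carries the alias +hello, and another carries the alias !%toro*6, serializing the model to JSON writes the aliases rather than the Gloop names, roughly like this:

```json
{
  "+hello": "world",
  "!%toro*6": 42,
  "while": true
}
```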
https://docs.torocloud.com/martini/latest/quick-start/resources/examples-package/alias-property/
2020-03-28T21:40:06
CC-MAIN-2020-16
1585370493120.15
[array(['../../../../placeholders/img/coder/gloop-model-alias.png', 'Screenshot showing a Gloop property with an alias'], dtype=object)]
docs.torocloud.com
Feature: #77900 - Introduce TypeScript for the core¶ See Issue #77900 Why we use TypeScript in the core?¶ TypeScript. Coding Guidelines & Best practice¶ /// <amd-dependency path="..." /> informs the compiler about a non-TS module dependency that needs to be injected in the resulting module's require call. The amd-dependency directive has a property name which allows passing an optional name for an amd-dependency: /// <amd-dependency path="..." name="alias" /> An example: /// <amd-dependency path="TYPO3/CMS/Core/Contrib/jquery.minicolors" name="minicolors" /> will be compiled to: define(["require", "exports", "TYPO3/CMS/Core/Contrib/jquery.minicolors"], function (require, exports, minicolors) { A very simple example is the EXT:backend/Resources/Private/TypeScript/ColorPicker.ts file. TypeScript Linter¶ Most of the rules for TypeScript are defined in the rulesets which are checked by the TypeScript Linter. The core provides a configuration file and grunt tasks to ensure better code quality. For this reason we introduce a new grunt task which first runs the Linter on each TypeScript file before starting the compiler. So if your TypeScript does not follow the rules, the task will fail. The idea is to write clean code, else it will not be compiled. Additional Rules¶ For the core we have defined some additional rules which you should know, because not all of them can be checked by the Linter yet: - Always define types and return types, even if TypeScript provides a default type. [checked by Linter] - Variable scoping: Prefer let instead of var. [checked by Linter] - Optional properties in interfaces are possible but bad style; this is not allowed in the core. [NOT checked by Linter] - An interface will never extend a class. [NOT checked by Linter] - Iterables: Use for (i of list) if possible instead of for (i in list) [NOT checked by Linter] - The implements keyword is required for any usage, even if TypeScript does not require it. [NOT checked by Linter] - Any class or interface must be declared with "export" to ensure re-use, or export an instance of the object for existing code which can't be updated now. [NOT checked by Linter] Contribution workflow¶ # Change to Build directory cd Build # Install dependencies npm install # Install typings for the core grunt typings # Check with Linter and compile ts files from sysext/*/Resources/Private/TypeScript/*.ts grunt scripts # File watcher, the watch task also checks for *.ts files grunt watch The grunt task compiles each TypeScript file (.ts) to a JavaScript file (.js) and produces an AMD module.
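A short sketch (not taken from the TYPO3 core) of what code following the additional rules looks like: explicit types and return types, let, for...of, implements, and export:

```typescript
// Illustrative only; the interface and class are made up, not TYPO3 core APIs.
export interface Task {
  name: string;
  done: boolean;
}

export class TaskCounter implements Task {
  public name: string = 'counter';
  public done: boolean = false;

  // Explicit parameter and return types, even where TypeScript could infer them.
  public countOpen(tasks: Task[]): number {
    let open: number = 0;
    // Prefer for...of over for...in when iterating values.
    for (let task of tasks) {
      if (!task.done) {
        open = open + 1;
      }
    }
    return open;
  }
}
```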
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.4/Feature-77900-IntroduceTypeScriptForTheCore.html
2020-03-28T20:13:24
CC-MAIN-2020-16
1585370493120.15
[]
docs.typo3.org
RCDay2018 From UABgrid Documentation Fall 2018 Research Computing Day -- Use Cases and Strategic Engagement Date: November 7, 2018 Venue: Hill Student Center, Alumni Theater Open to all UAB faculty, staff, and students. Registration is free but seating is limited, so please register here to attend.
https://docs.uabgrid.uab.edu/tgw/index.php?title=RCDay2018&oldid=5832&diff=prev&printable=yes
2020-03-28T20:58:12
CC-MAIN-2020-16
1585370493120.15
[]
docs.uabgrid.uab.edu
Configuration¶ This section documents additional configuration options of OpenLMI-Software not covered by Configuration files. Note All additional options listed here are specific to the Python implementation. The C provider ignores them. Apart from the main configuration file /etc/openlmi/openlmi.conf, all software related settings are also read from /etc/openlmi/software/software.conf. They take precedence over the settings from the main configuration file. Options¶ A list of valid options follows, with sections enclosed in square brackets. yum options¶ Options related to the use of the yum API and its configuration. - [Yum] LockWaitInterval : defaults to 0.5 - Number of seconds to wait before the next attempt to lock the yum package database. This applies when the yum database is locked by another process. - [Yum] FreeDatabaseTimeout = 60 : defaults to 60 - Number of seconds to keep the package cache in memory after the last use (caused by a user request). The package cache takes up a lot of memory. Log options¶ - [Yum] FileConfig : defaults to empty string - This option overrides any other logging option. It provides complete control over what is logged, when and where. It's a path to a logging configuration file with the format specified in Configuration File Format. The path can be absolute or relative; in the latter case it's relative to the directory of this configuration file. YumWorkerLog options¶ This section is aimed mostly at developers of the OpenLMI-Software provider. The yum API is accessed exclusively from a separate process called YumWorker. Because a separate process cannot send its log messages to the CIMOM, its logging needs to be configured separately. - [YumWorkerLog] OutputFile : defaults to empty string - This is an absolute or relative path to the file where logging output is written. Without this option set, logging of YumWorker is disabled (assuming the [YumWorkerLog] FileConfig option is also unset). - [YumWorkerLog] Level : defaults to DEBUG - This has generally the same meaning as Level in the previous section (Log options), except that it affects only the logging of the YumWorker process. - [YumWorkerLog] FileConfig : defaults to empty string - Similar to the FileConfig option in Log options. This overrides any other option in this section.
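Putting the options above together, /etc/openlmi/software/software.conf might look like the following sketch (the values and the log file path are illustrative placeholders, not recommendations):

```ini
[Yum]
# Seconds to wait between attempts to lock the yum package database.
LockWaitInterval = 0.5
# Seconds to keep the package cache in memory after the last use.
FreeDatabaseTimeout = 60

[YumWorkerLog]
# File receiving YumWorker log output; logging is disabled when unset.
OutputFile = /var/log/openlmi-software-yumworker.log
Level = DEBUG
```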
https://openlmi.readthedocs.io/en/latest/openlmi-providers/software/configuration.html
2020-03-28T21:12:35
CC-MAIN-2020-16
1585370493120.15
[]
openlmi.readthedocs.io
What are projects and milestones? A project is between a single freelancer and a single client, and is divided into intermediary steps we call milestones. Each milestone has: - a deliverable (the work the freelancer has to submit) - a deadline (when they have to submit it by) - a price (how much they will get paid for it) 👉 Milestones take place one after the other within a project and cannot overlap.
https://docs.freelancerprotocol.com/defs.html
2020-03-28T20:06:07
CC-MAIN-2020-16
1585370493120.15
[]
docs.freelancerprotocol.com
Last updated: 26 Feb 2020. Changes made to the CTAS installation guide and iTDS migration guide. As a part of our continuous maintenance endeavors, we have released new server updates for the iTDS and CTAS. You will need to update your Maltego servers to ensure that your server license certificate continues to work and Maltego Desktop clients are able to connect to on-premise servers. If this update does not take place before the beginning of May, your servers will stop working. What to do: You need to update or re-install your servers. The update process for your server and its certificate* is detailed in the attached documents below. You can download these for your own use and circulation. *Where to find your server certificates? The email you received will contain your new delivery document, which contains the link to your new server license certificates. Timeline and important dates: - Jan. 30, 2020: Release of Maltego Desktop 4.2.9 (update recommended) - Feb. 5, 2020: Release of new server images and server certificates - May 11, 2020: Deadline to deploy new servers (incl. new license certificate) We understand this is a critical task and are here to support you. Please do not hesitate to contact us as soon as you need help. How to reach us: Reply to the email you have received OR send us a support ticket by clicking on "new support ticket" above. Reach out to [email protected] and your request will be escalated immediately. Once you have updated your server successfully, please let us know for our records. We will be reaching out to you soon if you own a server but have not yet received an email. Please let us know if the matter is urgent so we can expedite your information to you.
https://docs.maltego.com/support/solutions/articles/15000031417-server-migration-2020-mandatory-update-to-all-maltego-itds-and-ctas-on-premise-servers
2020-03-28T20:27:29
CC-MAIN-2020-16
1585370493120.15
[]
docs.maltego.com
Using an Access 2007 database with ASP.NET 3.5 and Expression Web 3. First, your system and server must have the 2007 Office System Driver: Data Connectivity Components installed. Many ASP.NET hosts have this installed, such as DiscountASP. Once this is installed, you're ready to start working. Drag a SqlDataSource control from the Toolbox panel and drop it into the Design view of your page. (The SqlDataSource control is under the Data category in the section of ASP.NET controls.) In the Design view of your page, next to the SqlDataSource control, click Configure Data Source. On the Choose your Data Connection screen, click New Connection. In the Choose Data Source screen, set Data Source to <other>, and Data Provider to .NET Framework Data Provider for OLE DB, and click OK. In the Connection Properties dialog, click the OLE DB Provider menu and select Microsoft Office 12.0 Access Database Engine OLE DB Provider. In the Server or file name box, enter the full path to your database, and then click Test Connection. If your test connection succeeded, you're on the right path so far! Click OK in the Connection Properties dialog, and then click Next in the Configure Data Source dialog. Save your new Connection String and click Next. Now it's time to configure the query for your database connection. Select the columns you want to display from your database, and click Next. In my example, I selected Amount, DonorName, and CampaignName. Click Test Query. If the test was successful, then click Finish.
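When the wizard finishes, it emits a SqlDataSource roughly like the sketch below (the database path, table name, and the GridView are assumptions added for illustration; only the column names come from the example above):

```aspx
<%-- Sketch only; adjust the Data Source path and table name for your site. --%>
<asp:SqlDataSource ID="DonationsDataSource" runat="server"
    ProviderName="System.Data.OleDb"
    ConnectionString="Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\Donations.accdb"
    SelectCommand="SELECT [Amount], [DonorName], [CampaignName] FROM [Donations]">
</asp:SqlDataSource>

<asp:GridView ID="DonationsGrid" runat="server"
    DataSourceID="DonationsDataSource" AutoGenerateColumns="true" />
```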
https://docs.microsoft.com/en-us/archive/blogs/xweb/using-an-access-2007-database-with-asp-net-3-5-and-expression-web-3
2020-03-28T22:37:17
CC-MAIN-2020-16
1585370493120.15
[]
docs.microsoft.com
Table of Contents Infrastructure Workflow - Deploy Kubernetes with Hub CLI Updated by Agile Stacks In this tutorial, you are going to deploy a Kubernetes cluster on AWS using Spot instances and cluster autoscaling for cost optimization. The cluster will be integrated with DNS, TLS via Let's Encrypt, and SSO via Okta. Hub CLI is a powerful and easy to use tool for infrastructure automation. It is also an interface to SuperHub (API). SuperHub is an infrastructure automation service to deploy and manage software stacks - in the cloud and on-prem. You can manage, configure, and implement change control for your infrastructure-as-code using the SuperHub. Install Hub CLI First, install Hub CLI binary: On Mac curl -O mv hub.darwin_amd64 hub chmod +x hub sudo mv hub /usr/local/bin hub extensions install On Linux curl -O mv hub.linux_amd64 hub chmod +x hub sudo mv hub /usr/local/bin hub extensions install Login into SuperHub: hub login -u [email protected] Export environment variable `HUB_TOKEN` in your shell, or add export HUB_TOKEN=... to your ~/.bash_profile file. Onboard Your Cloud Account SuperHub is linked to the AWS cloud account via credentials supplied by you. When Hub CLI works in the local mode it talks to the cloud directly - by using ~/.aws/credentials. To use a named AWS profile for multiple Hub CLI commands, you can set the AWS_PROFILE and the HUB_TOKEN environment variables at the command line. export AWS_PROFILE=profile1 export HUB_TOKEN=1896390...a009 When Hub CLI drives SuperHub via its API, it must setup several essential cloud resources in the account: - An S3 bucket for Hub CLI and Terraform state, for backups; - DNS zone that will be linked to superhub.iounder a subdomain of your choice; - A cross-account IAM role that will be used by SuperHub to securely deploy resources in your cloud account. SuperHub do not store your AWS keys. Choose a cloud account name. This will become the subdomain under which all of your Kubernetes services are hosted. The name will become a subdomain under superhub.io For the rest of this example, we will use my-domain-01.superhub.io Please replace it with your chosen name in the rest of the examples. superhub.ioDNS domain. This is to ensure that there is a valid DNS zone to host all of the Kubernetes clusters that are created. The domain name can be changed later, but for now, the fastest approach is to use superhub.io hub api cloudaccount onboard -w my-domain-01.superhub.io aws us-east-2 Consult hub api cloudaccount onboard --help for additional details. You can list cloud accounts via the following command: hub api cloudaccount get Create an Environment The environment is a SuperHub umbrella. Create Your Template SuperHub approach of repeatable infrastructure-as-code revolves around Templates - a collection of automation scripts to deploy interconnected software Components. A unit of deployment is a template; also, every component could be redeployed or undeployed individually. Start by copy-pasting this json into your own template.json file. Feel free to name the template whatever you like, but remember it, because we'll use it in the next steps. 
template.json { "name": "kubernetes-1", "description": "Kubernetes with Let's Encrypt", "kind=platform" ], "stack": "k8s:1", "componentsEnabled": [ "traefik", "dex", "cluster-autoscaler", "cert-manager", "kube-dashboard2" ], "verbs": [ "deploy", "undeploy" ], "parameters": [ {"name": "component.kubernetes.etcd.count", "value": 1}, {"name": "component.kubernetes.etcd.size", "value": "t3.micro"}, {"name": "component.kubernetes.etcd.spotPrice", "value": 0.20}, {"name": "component.kubernetes.master.size", "value": "t3.medium"}, {"name": "component.kubernetes.master.count", "value": 1}, {"name": "component.kubernetes.master.spotPrice", "value": 0.20}, {"name": "component.kubernetes.worker.size", "value": "m5.large"}, {"name": "component.kubernetes.worker.count", "value": 2}, {"name": "component.kubernetes.worker.spotPrice", "value": 0.20}, {"name": "component.kubernetes-dashboard.rbac.kind", "value": "admin"}, {"name": "component.ingress.urlPrefix", "value": "app"}, {"name": "component.ingress.ssoUrlPrefix", "value": "apps"} ] } Create the template in the SuperHub service using the template file you just created. $ hub api template create < template.json Here is a template named kubernetes-1 which provides several software components: - Traefik as ingress controller - Dex for SSO with Okta - Kubernetes Cluster Autoscaler - Cert-manager for TLS with Let's Encrypt - Kubernetes Dashboard v2 The template we created is based on k8s:1 Super Template that provides boilerplate and default configuration. Kubernetes will be deployed on three AWS spot instances: one Etcd node on t3.micro , one Master node on t3.medium, and one Worker node on m5.large . Kubernetes Dashboard is given admin permissions. Ingress subdomain for the the cluster deployed from this template is app; apps for SSO protected ingresses. Initialize the template: hub api template init kubernetes-1 This command creates a Git repository to host the template source code and populates it with automation scripts, components, and parameters. The repository is accessible via https URL, and you can clone a local copy on your computer using standard Git tools. To list available templates: hub api template get kubernetes-1 Deploy Your Stack Instance This step creates the Kubernetes cluster in your AWS account, and populates it with the components specified in the template. We call it a "Stack Instance" because it is an instantiation of a Stack Template. You can create 1 or 1,000 instances of the same template. They can also be upgraded as their constituent components are upgraded. Create stack instance with defaults from the template: $ hub api instance create <<EOF { "name": "cluster-01", "environment": "Dev01", "template": "kubernetes-1" } EOF The stack instance is created in initial state. <stack instance name>.<cloud account name>.superhub.io Finally, deploy! hub api instance deploy -w cluster-01.my-domain-01.superhub.io my-domain-01with the name of cloud account you specified in Onboard your Cloud Account step. The command will show the deployment log and will exit after the automation task is completed on SuperHub side. 
2020/02/18 20:28:38 Completed deploy on kubernetes-1 with components stack-k8s-aws, tiller, automation-tasks-namespace, cert-manager, traefik, dex, cluster-autoscaler, kube-dashboard2 2020/02/18 20:28:38 Wrote state `hub.yaml.state` 2020/02/18 20:28:38 Wrote state `s3://agilestacks.im-demo01.superhub.io/cluster-01.im-demo01.superhub.io/hub/kubernetes-1/hub.state` 2020/02/18 20:28:38 Syncing Stack Instance state to SuperHub ===> 12:28:38 cluster-01 [2103] stackInstance update success 2020/02/18 20:28:38 All warnings combined: Error query parameter `component.kubernetes.bastionHost` in environment `342`, stack instance `2103`: Unable to retrieve parameter `component.kubernetes.bastionHost`: `value` not set Error query parameter `component.kubernetes.bastionHost|stack-k8s-aws` in environment `342`, stack instance `2103`: Unable to retrieve parameter `component.kubernetes.bastionHost`: `value` not set Error query parameter `component.kubernetes.master.elb` in environment `342`, stack instance `2103`: Unable to retrieve parameter `component.kubernetes.master.elb`: `value` not set Error query parameter `component.kubernetes.master.elb|stack-k8s-aws` in environment `342`, stack instance `2103`: Unable to retrieve parameter `component.kubernetes.master.elb`: `value` not set ===> 12:28:43 cluster-01 [2103] stackInstance update success ===> 12:28:46 cluster-01 [2103] stackInstance deploy success Inspect the instance: hub api instance get cluster-01.my-domain-01.superhub.io Visit Kubernetes Dashboard at Also, you can view the Traefik Dashboard at which will show you the services that are hosted in your K8s cluster. From here you can explore SuperHub via hub api commands. Or check out the hub CLI documentation Alternatively, go to to view infrastructure resources in the UI. Navigate to the stack instance using Stacks > Clusters > List menu. You can learn more about the Control Plane here Cleanup the Cloud Resources In the previous step you have deployed several resources in AWS, including ec2 instances, s3 buckets, load balancers, and security groups. Upon completion of this tutorial you may want to delete these resources to save cloud costs. hub api instance undeploy -w cluster-01.my-domain-01.superhub.io hub api instance delete cluster-01.my-domain-01.superhub.io hub api template delete kubernetes-1 hub api environment delete Dev01 hub api cloudaccount delete -w my-domain-01.superhub.io The stack instance is undeployed, and all related AWS resources are automatically cleaned up. You can save the stack template kubernetes-1 in your environment so you can easily redeploy it later. Conclusion Congratulations! You have completed the tutorial and learned how to create a Kubernetes cluster with Hub CLI. The cluster you created provides several add-ons that are essential for any application deployment: dashboard, ingress, DNS, load balancer, SSL certificate manager, SSO authentication, and cluster auto-scaler. In the next tutorial (coming soon), you can learn about how to use SuperHub to deploy additional components as overlay stacks.
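Once the deploy finishes, a quick way to verify the cluster from your workstation is plain kubectl, assuming you have downloaded the cluster's kubeconfig (the retrieval step and the file path below are not shown above and are placeholders):

```bash
# Point kubectl at the downloaded kubeconfig (path is a placeholder).
export KUBECONFIG=~/Downloads/kubeconfig.cluster-01.yaml

# The worker nodes and the add-on components from the template should show up.
kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl get ingress --all-namespaces
```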
https://docs.agilestacks.com/article/z0draxk06r-infrastructure-workflow-deploy-kubernetes-stack-hub-cli
2020-03-28T20:16:59
CC-MAIN-2020-16
1585370493120.15
[array(['https://files.helpdocs.io/5dz7tj1wpg/articles/z0draxk06r/1582056216057/image.png', None], dtype=object) ]
docs.agilestacks.com
LMISubscription¶ - class lmi.shell.LMISubscription.LMISubscription(client, cim_filter, cim_handler, cim_subscription, permanent)¶ Class holding information about an indication subscription. - delete()¶ Cleans up the indication subscription. First it deletes the subscription object. If LMISubscription._cim_filter_tpl contains a flag indicating that the filter object was created temporarily, it will be deleted by this call. If LMISubscription._cim_handler_tpl contains a flag indicating that the handler object was created temporarily, it will be deleted as well. This is called from the LMIConnection object, which holds an internal list of all indications subscribed to by the LMIShell (if not created by hand).
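A rough LMIShell sketch of how such a subscription is usually created and torn down; the connect/subscribe_indication/unsubscribe_indication calls and their arguments are recalled from the LMIShell API and should be double-checked against your installed version, and all names and URLs below are placeholders:

```python
# Sketch only -- verify method names and signatures against your LMIShell version.
from lmi.shell import connect

c = connect("server.example.org", "pegasus", "password")  # placeholder credentials

# Subscribing creates the filter, handler and subscription objects; LMIShell
# keeps such subscriptions in an internal list on the connection.
c.subscribe_indication(
    Name="my_software_watch",                            # hypothetical name
    Query="SELECT * FROM LMI_SoftwareInstModification",  # hypothetical query
    Destination="http://client.example.org:12121")       # listener URL placeholder

# Unsubscribing ends up calling LMISubscription.delete(), which also removes
# the filter and handler objects if they were created temporarily.
c.unsubscribe_indication("my_software_watch")
```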
https://openlmi.readthedocs.io/en/latest/openlmi-tools/api/shell/LMISubscription.html
2020-03-28T21:04:25
CC-MAIN-2020-16
1585370493120.15
[]
openlmi.readthedocs.io
How-to articles, tricks, and solutions about ANGULARJS AngularJs '{{' and '}}' symbols conflict with Twig If you use Symfony as your PHP framework and AngularJS as your JavaScript framework, you will have problems with the print syntax, because Twig uses the same {{ }} delimiters. AngularJs Blocks Form Submit without "action" In general, AngularJS blocks the form submit because of an empty "action". AngularJs Modules Good architecture AngularJS is a structural framework for dynamic web applications. Read on for a solution to how to structure AngularJS modules.
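The first article above deals with the clash between Twig's and AngularJS's {{ }} delimiters; the page only lists the article, so as an assumption about its content, the usual fix is to give AngularJS different interpolation symbols:

```javascript
// Change AngularJS interpolation delimiters so they no longer clash with Twig's {{ }}.
angular.module('app', [])
  .config(['$interpolateProvider', function ($interpolateProvider) {
    $interpolateProvider.startSymbol('[[');
    $interpolateProvider.endSymbol(']]');
  }]);

// In templates you then write [[ expression ]] instead of {{ expression }}.
```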
https://www.w3docs.com/snippets-tags/angularjs
2020-03-28T21:10:18
CC-MAIN-2020-16
1585370493120.15
[]
www.w3docs.com
Updates v3.3.0 – v3.21.0 Find out what features has been add and improve. Update v3.21.0 20 August 2019 Payment Method – Added Strong Customer Authentication (SCA) support in Stripe payment method. Fixed Price improvements – Added option to search fixed prices by location name. – ALL to ALL locations setting in fixed prices is no longer possible. – Fixed an issue where some Fixed Prices postcodes could not be loaded due to clients server settings. – Fixed prices help popover will now be automatically closed when clicked outside of it. – Added column visibility, reset and reload buttons in fixed prices listing tab. – Added “Z” symbol for “Zone” in fixed prices listing tab for easier navigation. Symbol “Z” in Fixed prices means the fixed price is using Zone based location. Customer Account – Added new functionality for Customer Account. When user filled in the address in the account, this address will be prefilled as default Pickup address in web booking. Booking Listing – Added booking reference tags {pickupDateTime}, {pickupDate}, {pickupTime} – In Booking listing we have made visible Number of passengers in each booking. This aims to speed up use by being able to see it without clicking View. – In Booking listing for pickup, dropoff and via we have added text wrapping. This aims to make cleaner interface. – Added passenger amount column in booking listing tab. – Addresses in booking listing tab will be automatically clip if the address is too long. Booking Status – Added setting Do not allow driver to cancel the job allowing to hide status Cancelled for the driver. This will in effect not allow driver to cancel any jobs. This can be set in Settings -> Users -> Driver – To avoid setup mistakes we made below three setting, that only one at a time can be set. – Allow driver to change status to “Driver Cancelled” and send an email notification to the admin – Allow driver to change status to “Cancelled” and automatically send an email notification to the customer – Do not allow driver to cancel the job – Added setting allowing to hide status On route, Arrived, On Board. Go to Settings -> User -> Driver. – We add functionality to change booking status color. This way each company can set up its own color theme. Web Booking – Added setting allowing to hide web booking fields: Via, Swap location, Return. This can be set in Settings -> Web Booking -> Step 1 – Added setting allowing to hide web booking fields: Book for someone else, Passengers (dropdown), More Options (tick box), Date, Vehicle type, Price journey summary, message under comment box. This can be set in Settings -> Web Booking -> Step 3 Sending job reminders – Added basic support for cron tasks e.g. sending job reminders. – Cron settings are now stored in the database. – Added password generator to cron job security key. Other fixes and improvements – Fixed an issue in dispatch booking form where vehicle types weren’t displaying in correct order. – Added timezone option in user profile. – Fixed problem with unpaid booking being display as unconfirmed instead of incomplete. Now the booking will show as incomplete until it is paid. When paid the status will be changed to unconfirmed. – Added “pickupDateTime”, “pickupDate”, “pickupTime”, “pickupDateTimeFormatted”, “pickupDateFormatted” and “pickupTimeFormatted” parameter to booking ref number format. – In Dispatch map functionality has been change. Now it will display location that has been saved in database instead of default one. 
– Fixed issue with booking search in driver listing tab, during search an error was coming up. – Fixed issue – push notifications will no longer show HTML code. – Corrected vehicle type image display in booking form. Now the system will scale up user uploaded images proportionally. – Added scrollbar in sidebar menu as indication to the user that he can scroll down. – Fixed problem with white screen coming up in dispatch tab when clicking on the driver that is offline. – Fixed problem with applying “Restricted Areas” when two different vehicle types, date and time overlap. – Notification errors (caused by incorrect SMS, Email setup) will now be saved to system log file instead of blocking the code. – Updated PHP libraries. – Software performance improvements by limiting database queries and code optimisation. – Fixed issue with duplicate steps in booking form when using Google Translator. – Added option in settings “Show edit profile button in driver account”. – Added option in settings “Allow driver to edit insurance number”. – Added option in settings “Allow driver to edit driving license number”. – Added option in settings “Allow driver to edit PCO license number”. – Added option in settings “Allow driver to edit PHV license number”. – Improved booking reference number help section, added explanation usage of each tag. Functionality explained 1 August 2019 In this update we want to explain few modules which has been shaped over past few months. Hopefully this will explain a bit better its final forms: New Booking Visibility After receiving valuable feedback from our customers we put some effort to improve visibility of incoming new bookings to the system. Here is what we did: - System perform a check of booking listing. If any booking has been made, it will automatically appear on the listing. - All new bookings are highlighted so for ease spotting and marked with a symbol “New” (in “Info” column) - Admin can change visibility status to “Marked as read / Marked as unread”, making highlighting and symbol “New” will appear/disappear. - Edited booking will be automatically set to “Marked as read” and highlighting and symbol “New” will disappear. Note: Default refresh time is set to 60 sec, it can be lower down to 30 sec (go to Settings -> Booking -> option Admin booking listing). Lowering it might affect system performance, so test and see what is best for you. Update v3.20.0 June 2019 Fixes & Improvements – Added support for PHP 7.0 in Redsys payment method – Fixed issue with option “Add passenger” on mobile device – Fixed issue with scrolling in dispatch booking form – Fixed issue with PostcodeAnywhere location location suggestions in admin booking form – Fixed issue where for Hourly Service price per hour wasn’t included before minimum price check – If route distance is equal to zero the system will not apply any mileage factors – Web Booking and Customer Account widgets performance optimisation – Added translation for footer message in emails sent to customer – Removed message “Not available” from Web Booking, step 2 (vehicle type listing) when enquiry button is enabled. – Added functionality to allow switch On/Off “Allow guest bookings”. 
This functionality can be founded Settings -> Web Booking -> Step 3 tab -> Allow guest booking – In Web Booking (step 2), the vehicle type icons has been reduced slightly in size – Fixed issues with opening admin booking form options on mobile – Fixed issue with closing calendar popup in driver app – Added search option in calendar tab allowing to narrow down the search to specific keyword and date&time – Added new “Unassigned” option to booking source list – Added option to upload files to feedback – Allowed new file extensions for feedback and user attachments upload – Fixed issue with positioning phone number country code popup on mobile – In Web Booking (STEP 2),the vehicle type list will now longer scroll to map box on page load – Vehicle list will now longer scroll to map box on page load. – Driver journey is now calculated to the closest pickup or dropoff location. – Added “Worldpay Online Payments” payment integration. – Fixed pagination in fixed prices tab, it did not divide results in pages. – Fixed problem with passing waypoint location data in URL. – Some security improvements Operating Area We have implemented a change in Operating Area functionality What to do when journey is not in operating area? -> Allow booking and add driver journey to total price The previous choice of options Base-Pickup-Dropoff, Pickup-Dropoff-Base and Base-Pickup-Dropoff-Base were calculating very different prices for Outbound and Return journeys. From version 3.20.0, driver journey is calculated from the Operating Area Address to the nearest point of journey, either Pickup or Drop-off, thus calculating same amount for Outbound and Return journeys. Note: In case of returning journey which has a different Pickup and Dropoff from Outbound journey, driver journey price will be different from Outbound as Pickup/Dropoff nearest location will also be different. This option is located in Settings -> Operating Area Zone system The Zone system has been tested and we believe it is working correctly, therefore we are taking it out of Beta version. Now we are working on expanding its use for Location, Meet & Greet, Meeting Points, Parking and Congestion charges. Spacial custom field in Dispatch A new custom field has been added to dispatch booking form and booking listing. This aims to allow any company to use it for its specific needs. This is available in Dispatch booking form settings. Update v3.19.0 20 June 2019 Dispatch module changes: – Changed default driver list order in booking form to be displayed by Unique ID. – Payment method field is now closed by default in booking form – Unconfirmed bookings are now displayed in dispatch Next 24 tab – All drivers with status unavailable will disappear from the map after 1h of inactivity. Other changes and fixes: – When the journey price is zero the vehicle type will not be shown on the vehicle list in the Web Booking form – Added a new column “Opened” to the booking listing tab, enabling booking sorting by read/unread parameter – Fixed issue with date and time picker being cut off in the Discount and Fixed Prices tabs – In Dispatch “Return Journey” button has been moved from the bottom of the booking form to the top of it, next to Location title for easier navigation – Some Translation corrections – Payment method field in the booking form, is now closed by default Update v3.18.0 May 2019 – Add functionality to send notification to customer and the driver whenever booking has been edited. 
The system will send it if any of the following parameters has been changed: pickup, dropoff, time, date, account, price, discount, parking, meet and greet, waypoints, vehicle type, additional items, driver, passengers amount,customer requirements, booking status, customer and lead passenger. The notification is sent depending on booking status. Added support for PHP v7.2 and v7.3 – Admin booking form improvements – some layout improvements and change of functionality of some of the fields that values can be edited directly from it instead of popups – Added “Zone ” functionality – Setting has been added to Dispatch map in order to make the map cleaner – Software Setup improvement – add functionality of database testing and password generation in order to make the installation easier and faster – Payment link sent from admin panel will now be sent in preferred customer language – HTML tags will be automatically removed from all SMS messages now – In Dispatch, booking form setting has been added to give better control over the booking form – Added option to mark booking as read/unread in booking listing tab. All unread booking are now highlighted – Added new tab “Next 24h” in booking & dispatch panel and it displays all booking which needs to be done within 24h – Added option to Settings -> Integration -> Mail settings for sending test emails – Added help info to Settings -> Google -> Autocomplete – Force selection – Refresh frequency reduce in Dispatch for driver visibility on the map to improve system performance – Refresh frequency reduce in Booking listings for incoming new booking to improve system performance – Added “sendmail_path” server configuration detection – Some Translation corrections Driver Assign option improvement We have listen to your requests to automatise calculation when assigning drivers to a booking. Here is what has changed: - Driver income is automatically calculated based on Driver Income %. Driver Income can be override during the assignment process. - Passenger Charge is automatically calculated based on Transaction in booking payment tab. Driver App iOS Update v1.4 May 2019 – Number of fixes for iOS system – scroll below to read Major Bug Fixes v3.17.0 Update v3.17.0 April 2019 Add New Booking - Add New Booking module in the Dispatch has been redesign to be more intuitive and easy to use - Improved Price field: If value is 0, upon clicking the value automatically disappear. If value is more than 0, it automatically selects the price for ease overriding - Mobile version – popups now disappear correctly - “Cash to Take” setting has been renamed to “Passenger Charge” - Journey time dropdown choice has been shorten it to every 5 minutes for faster use. - A setting tab has been added to allow more flexibility Setting tab A bunch of new setting have been created - Advanced settings (open/close) - Passenger, Suitcase, carry-on (enable/disable) - Waiting time after landing (enable/disable) - Show unavailable drivers (enable/disable) Show Unavailable Drivers – when enable, all driver with status Unavailable (not at work at the time) are display in below the dispatcher map, marked in grey. This option can be very useful when you assign driver to future jobs. Edit Booking - Automatic notification: After editing a booking if any of the following parameters has been changed: address, time, date, account, price, an automating notification is send to the customer and the driver. Major Bug Fixes 1. Tracking been switch off when other apps have being used in foreground mode. 2. 
Driver app getting stuck when switch to google maps via navigation button. Both issues has been resolved but during the testing process we have found there is another issue which is related to Android 7+ version where system switch off tracking within few minutes when used in background mode. This issue is related to React Native Expo technology EasyTaxiOffice app uses which appears not to be compatible with latest Android update. We are not developers of this technology, so at the moment we are awaiting support from React Native team. We have implement temporary solution for Android 7+ version users. We have created a setting Location Updates (available in top-right corner of the app). Driver who lose tracking while using driver app in background mode, we advice to switch to Foreground mode. Switch off and on the app to have your app updated. We expect this update solves both issues but we would love to hear your feedback, especially if this solution doesn’t work. 3. Driver could proceed with the job while not appearing on dispatcher map. Driver status system has been revamped. New system will require driver to change status to Available and agree to being track before being able change booking status and start the job. Driver status system consist of: Available: – Driver can change booking status to On Route and start the job only when switch to Available status. – Changing to status Available requires the driver to accept being tracked and be visible on dispatcher map. – When status is change to Available, driver will be track and will appear on dispatcher map marked with color Green (available – not on the job) or Red (unavailable – currently on the job). Status On Break: – On Break status allows driver to inform the company he/she is On Break but still display their location. – Changing to status On Break requires the driver to accept being tracked and be visible on dispatcher map. – On the map his status is marked in Blue. Status Unavailable: – Changing to this status, disable tracking and driver is no longer visible on the dispatcher map. – Driver can change booking status to Cancel at any time. Update v3.16.0 April 2019 Notifications changes Some of our client have asked us to expand functionality of sending driver and vehicle details to customer. For this reason we have made some adjustment to notification system, here is what has change: Admin can set in Notification tab to send driver and vehicle for fallowing status: - Assigned – when admin assign the job to driver - Accepted – when driver accept the job - On Route – when driver change status to On Route Fixes and improvements – Dispatch booking listing has been modified and now only one listing tab is loaded at the time (before was all tabs). This leads to less data exchange with the server, shortening page loading time – Error: SMS not working correctly – this issue was caused when more than one connection has been activated in same time. 
We fixed this error by adding a switch “SMS service” which allows only one system to be activated at the time – In driver account/app: In Progress tab has been moved to the top – Added option to remove whitespaces from company settings in config tab – Added option to load timezone form database – Fixed problem with auto booking confirm option quote – Improved translations – Notifications translations improvements – Dutch translation improvements – Updated driver app name and links – New setting has been created Settings -> User -> Drivers -> Attach booking details to driver assign notification email and Attach booking details to driver assign notification sms. This functionality allows sending full booking information to driver’s via email and sms – Forced SSL connection for TextLocal api connection – Improved driver assign option in booking listing tab. Admin can now set commission, cash, notification fields in one go – Hourly service will no longer show enquire button – Added “GP webpay” payment method – Disabled browser autocomplete option in booking form and payment edit page – Improved phone international country code detection – Removed disabled title in passenger name in frontend booking form – Fixed problem with duplicating booking entries in frontend booking form – Improved cache system in booking quote – Improved invoice printing option – Improved preferred booking language functionality Update v3.11.3 March 2019 – Updated translations for config tab – Fixed problem with Restricted Area when default vehicle is selected – Update Czech translations – Now admin can search bookings by source details February 2019 update v3.11.0-2 – Added new booking form in admin panel – Implemented Google API check – user will see an error if Google API error occurred – Added option “Min number of characters for location suggestions search” in admin tab thus minimising usage of Google services and reduce charge for it – Added option to enable/disable members benefits – Displayed source and preferred notification language in booking detail tab – Added Google Services cache to speed up quotation process – Improved debugging tools – Added Czech and Dutch translations – Booking form customer and driver small improvements – Add status colour highlight in booking listing tab. January 2019 update v3.10.2-3 – Payment method will now be displayed in modal title – Added Stripe iDEAL payment method integration – Added help section to location tab – Booking confirmation and thank you page translations and layout improvements – Improved booking export option, no services will be added to the export list if service option is deactivated – Added scroll bar CSS styling – Language switcher will be now hidden until the booking form is fully loaded – Added “Book by phone” option in booking form – Added option to show/hide return journey, add via, swap address, book now buttons in booking form – Fixed problem with removing search filters in booking listing tab – Added option to choose if the booking will be confirm or unconfirmed after it is created by a customer – Fixed bug with copying phone number to booking form for logged in customer – Changes to notification system – admin will not receive notification when in edit mode. Reason for it is that admin is doing the change so he there is no need to inform him of it – New bookings are now highlight in booking listing tab in admin Pending status has been changed to Confirmed. All new bookings are set as Confirmed. 
Reason for it that after update to New Booking Visibility, pending status was not needed anymore. December update v3.10.1 – Web Booking form layout & styling improvements. November update v3.9.3 – v3.10.0 – Added option to choose how to display service dropdown or tabs. – Added to localization settings section option to control how the names are displayed, full names or language codes. – Services can be display as tabs. – Set dispatch as default page after login if dispatch module is activated. – Hungarian translation corrections. – Added booking date to driver details SMS. – Corrected phone country code box positioning on mobile. – Changed quote function remote connection cURL to GuzzleHttp. – Improved modal window displaying on iPhone. – Added “Portuguese (Brazil)” language. – Updated web widget integration page. – Updated “Polish” language translation. – Improved location search option. – Improved booking form user experience. – Added option to enter member benefits. – Added option to enable/disable mini web booking widget header. – Improved help section in discount tab. – Improved help section in payment method tab. – Added option to hide vehicle without price. – Added option to display vehicle type in calendar. – Added option to send booking details via SMS to driver. – Updated iframeResizer plugin. – Added option to choose how the driver journey will be calculated – Added swap location button to mini booking widget and booking form. – Added placeholder text “Optional” to via fields in booking. – Fixed problem with PDF library when using PHP 7.1. – Added get current location button to booking form. October update v3.9.0-2 – New Dispatcher tab has been added allowing admin to use three module in one tab: Add new booking, see driver on map and see latest booking list allowing fast and easier dispatching. – Improved login and register page style. – Improved multi-site functionality in app loader function. – Fixed problem with displaying correct feedback link in customer account. – Added dispatcher tab. – RingCentral changed app key. – Improved quote location geocoding function. – Moved pricing to settings menu. – Added web widget integration page to settings menu. – Improved in quote module driver journey route finding (shortest). – Changed “_parent” to “_top” target link to avoid problems with popup permissions when iframe loaded in another iframe. – Improved “Airport detection” module in backend. September update v3.8.0-6 – Fixed bug with wrong formatting in pdf library when running php 7.1. – Vehicles module has been moved to settings. – Added new option “Fastest (Without traffic)” to route type in Google settings. – Updated iframeResizer plugin. – Add new booking by admin has been updated – from this version, all booking add in dashboard will have “Source” marked as Admin, giving indication it has been add by Admin. The source can be change while New booking is add. – PayPal transaction notification improvements. – Added operating area check to scheduled service. v3.7.5-6 – Fixed problem with sending duplicate notifications when transaction status is set to paid and the link session has expired. – Improved notification messages. August update v3.7.0-4 – Changed default value for “incomplete_bookings_delete_after” option to 72h – Updated “date range picker” plugin to the latest version – Improved image resize functionality – Added help popup to Fixed Prices tab – Updated notification translations – Added “optional” message to full address and comments fields in booking form. 
– Added email verification in booking form in admin panel. – Added min booking time countdown in booking form. – Added help info to vehicle type image upload option. v3.6.2-4 Caller ID has been added to the system What it means is that we have connected software to RingCentral telephony system. When company opens an account with RingCentral telephone and connect to software, admin will be able to see Caller details if the phone number has been used previously. Admin will be also able to see details of the previous Caller booking helping to quickly proceed with finding information and making new booking. Please check screenshots in this tutorial. Other updates – Fixed problem with min booking time in scheduled service. – Added help info in payment method page. – Added option to trim white spaces from API key fields during saving. – Fixed issue with onload scrolling in customer account. – Added no transaction error message in payment page. – Fixed problem with direction route finding. The problem occurred when the date was in the past (old bookings). – Added option to hide service dropdown menu if there is only one option available. – Improved Google Analytics and Adwords tracking. July update v3.6.0-1 – Added dropdown arrow icon support for iOS devices. – Booking from and customer account layout improvements. – Added option to enable/disable postcode check in quote module. – Added service type restriction to payment methods. – Hide services dropdown menu if there is only one option available. – Disabled minimum date restriction in booking edit form. – Disabled double quote check when date of the booking has changed. – Removed “no location” error message from booking edit form. – Improved scheduled service time selection on iOS. v3.5.2-11 – Fixed problem with airport detection in quote module. – Improved info popups in config tab. – Improved customer registration form. – Improved translations. – Improved user account functionality. – Improved booking form functionality. – Added international phone number support in booking form. – Fixed problem with night surcharge range overlapping, double price. – Fixed problem with distance “Add” factor when via and operating address is present. – Added “no_https_redirect” option which fixes problem with WorldPay payment response. – Price quote improvements. – Fixed problem with accuracy out of range in users database table. – Admin booking edit tab improvements. – Fixed bug with min booking time in booking form. – Fixed bug in price quote. Removed double calculations when applying vehicle factor in base price when via address has been entered. – Improved Scheduled Route functionality. – Driver booking tab improvements. – Updated translations. – Renamed “Language” section to “Localization” in config tab and moved some option in from other sections. – Added few new options to date and time format field in config tab. – Improved night surcharge functionality. Now more options can be added. June update v3.5.0-1 – Updated translations in Types of Vehicles tab. – Fixed problem with Stripe payment redirect which caused sending a duplicate booking confirmation email. – Added warning message for meeting board “download” and “print” buttons in driver app. – Added feedback functionality (Comments, Lost & Found, Complaints). v3.4.32-34 – Added option to enable iFrame scroll to top functionality and position offset. – Improved delete record message in vehicle type tab. 
– Added an option to calculate the return journey price the same way as the one-way price, to avoid the slightly different price Google Maps calculates for the slightly different return distance. – Added functionality so an email is sent when a booking has been cancelled. – New functionality has been created to send driver and vehicle details to the customer upon driver assignment. – New functionality has been created to allow the admin and driver to print and download the Meeting Board. – Cancellation functionality has been expanded. The admin can now choose whether to allow a driver to cancel a job and send an automatic email informing the customer of the cancellation, or only cancel the job so the admin can contact the customer about it (a new "Driver cancelled" status has been added). – Added a driver app trial mode allowing all new customers to test the Driver App free of charge for the first 30 days. – The scheduled route feature has been added to the system (available only on special request). May update v3.4.30-31 – A new 'Services' panel has been created to allow setting different service types that influence the Web Booking calculation. – Fixed a bug with Squareup payment redirection when in an iframe. – Fixed a bug with the Squareup payment notification status update. – Notification system improvement – the notification with driver and vehicle details sent to the customer can be set to go out automatically with the status Job Accepted or On Route. – A Terms and Conditions page has been added to the system – there is no need to use external T&Cs anymore. It can be set to be attached automatically as a PDF to the Booking Confirmation email and displayed in the third step of the Web Booking. – Some translation updates. – Map improvements in the admin panel. v3.4.22-29 – Improved the booking reference number generation function. – Added SMS Gateway API support, which allows sending SMS directly from a mobile (beta version). – Added an option to enable or disable the extra geocode location finder function in the quote file. – Added an option to send driver details to the customer when a driver is assigned. – Moved the "timestamp" site config option to the app config. – Added an option to switch between internal or external Terms and Conditions. – Masked Stripe keys in the edit payment type page. – Fixed a problem with map scaling in the admin panel. – Simplified the user and vehicle menus in the admin tab. – Fixed a bug with calculating the total price in admin (division by zero). – Added a preferred notification language to the booking edit tab. – Some improvements to booking and calendar functionality. – Some improvements to the admin and driver interfaces. – Added an option to set a user's default language. – A few bug fixes. – All booking emails are now sent in the customer profile's default language. – Improved the return URL option in the edit booking tab. – The calendar now remembers its current state. – Added more booking details to the confirmation email. – Added a few options to control booking details visibility in the admin calendar. – Added an additional info option to the BACS payment method. – Added "minutes after landing" text to the driver job detail page.
April update – The Customer App is available – Added Customer push notifications – Added Customer app support – Fixed a bug with the payment testing value – Added auto redirect to the previous page after editing a booking – The booking listing will now expand its height depending on page size rather than on the content, as it did before – Moved the "Driver App" menu to the "Settings" menu – Fixed a bug with the modal jumping to the bottom of the page on iOS devices – Fixed a bug with Google Maps in admin – Added auto-refreshing counts in the driver dashboard – Added the Squareup payment method – Some bug fixes – Some layout improvements – Hungarian translations updated March update – The Operating Area has been expanded. You now have an additional choice of how it influences the price of any journey that neither starts nor finishes in the Operating Area. You can now set: a) Booking is not allowed – No booking can be made. A message is displayed that the company does not operate within this area. b) Allow submitting booking without price – A booking can be made but pricing is not displayed. A message appears: "Please finish booking reservation without price and we will provide you with a quote". Prices are not displayed but the booking can be made as normal, with a Reserve button at the end instead of Cash or Credit Card payment. c) Allow booking and add driver journey to total price – A booking can be made, but the cost of the driver journey to the nearest point (Pickup/Dropoff) is added. – Show the Company Name next to the name – Simplified the search From and To fields in the booking add new and edit tabs – Enabled changing the booking status directly from the booking listing tab – Improved the search system in the booking edit tab in admin – The notification system has been improved – you can now control which notifications will be sent to the Customer, Driver and Admin and how they will be sent – via Email, SMS or App (depending on availability) – Added a "Support" link in the admin menu, so you can easily access the "How to use" documentation – Improved the mail option, added sendmail – Fixed some bugs in the parking charge – Added a vehicle minimum price option in the config tab – you can now set an individual price for each vehicle type – Improved the invoice file name format – Displayed a price summary in the booking detail page and email – Enabled autocompleting data when assigning a customer in the add new and edit booking tabs February update – Displayed a price summary in the booking detail page and email. – Autocomplete data when assigning a customer. – Added a "New" symbol to the booking listing as an indication of new booking creation. – Added the driver unique_id to the driver list in the filter and sorted it by this value. – After creating a new booking the user will be redirected to the Latest page. – Added driver and status notes in the booking. – Added "Show/Hide" functionality to some fields in the config tab. – Added a logo upload option in admin. – Added a view invoice option in the listing. – Improved the duplicate booking option. – Added a new status, "arrived". – Displayed driver details in the booking details page. – Improved the language switcher. – Added "Italian" and "Hungarian" translations. – Lots of interface improvements. January update – Added the user's online status in the avatar in the users tab. – Improved the UI in the vehicle type listing tab. – Improved responsiveness of listing tabs. – Added an option to clear state (cookies) in listings. – Added a new interface to enter parking charges. – Added an option to create custom dropdown menus in the additional charges tab. – Fixed a problem with sending booking details to the admin. – Fixed the price deposit. – Added a waiting time charge option.
– Added icon and colour picker option to location category tab. – Simplified user interface in driver and admin panels. – Added shortcut status links for booking listing in admin. – Improved filter search in booking listing tab. December update – Added vehicle and payment image upload option. – Added new option to operating area tab. – Improved auto refresh booking listing option. – Added night and holiday charge factor type in config tab. – Added payment method and payment status filters in booking tab in admin. – Added show traffic and labels option in map tab. – Added duplicate booking option in admin. – Added Payzone payment option. – Added “BACS” payment method. – Displayed all statuses in booking filter status dropdown menu. – Improved navigate link in driver account. – Improved calendar in admin. – Improved date format, admin can now choose format type. – Adde Show “Unique ID” next to driver name. – Added few options in config tab to control what driver can see in his account. – Hide driver income for “onroute” and “onboard” statuses so that customer can not see how much driver earns. – Bug fixes.
https://docs.easytaxioffice.com/updates/updates-v3-3-0-v3-21-0/
2020-03-28T20:10:08
CC-MAIN-2020-16
1585370493120.15
[]
docs.easytaxioffice.com
Making the diagram visible for others to see... From: One of the questions we still get is: "What is your story for sharing diagram? If I do not have the Visual Studio Team Edition for Software Architects version of Visual Studio how do I view the diagram?" We would have liked to have a read-only view (or something similar) of the diagrams for the other versions of Visual Studio. However, this was not possible in this version. There are workarounds. I have listed these below: The deployment report is probably your best option if you want to distribute visualizations of the architecture of the solution, the deployment definition and logical datacenter being deployed to. Generating the deployment report results in the creation of set of images that correspond to the various diagrams. The deployment report's user readable form (html) includes these images. We invision that architects will generate a deployment report and share it out with the rest of their teams. For more information on the deployment report, see . In the case where a logical datacenter diagram or a deployment diagram is not available, the user should use the 'Copy Image' option in the 'Edit' menu and save the images to the solution. The image can then be checked into a SCC Server such as VSS and then can be viewed by the rest of the team who may or may not have Visual Studio available.
https://docs.microsoft.com/en-us/archive/blogs/a_pasha/making-the-diagram-visible-for-others-to-see
2020-03-28T22:29:07
CC-MAIN-2020-16
1585370493120.15
[]
docs.microsoft.com
Creating nodes locally Local nodes are used for testing and demo purposes only. There are two ways you can create a node locally: - Manually: create a local directory, add the relevant node and CorDapp files, and configure them. - Automatically: use the Cordform or Dockerform gradle plug-ins, which automatically generate and configure a local set of nodes. Create a local node manually To create a local node manually, make a new directory and add the following files and sub-directories: - The Corda .jarartifact file, downloaded from - under ../4.6/corda-4.6.jar. - A node configuration file with a name node.conf, configured as described in the Node configuration section. - A sub-directory with a name cordapps, containing any CorDapp .jarfiles you want the node to load. - An up-to-date version of the network-parametersfile (see The network map), generated by the bootstrapper tool. The remaining node files and directories will be generated at runtime. These are described in the Node folder structure section. Run the database migration script if upgrading - Remove any transactionIsolationLevel, initialiseSchema, or initialiseAppSchemaentries from the database section of your configuration. - Start the node with run-migration-scriptssub-command with --core-schemas. See Upgrading your node to Corda 4.6 for more information. Step 7. Start the node in the normal way Start the node in the normal way. Use Cordform and Dockerform to create a set of local nodes automatically Corda provides two gradle plug-ins called Cordform and Dockerform. They both allow you to run tasks that automatically generate and configure a local set of nodes for testing and demonstration purposes. - A Cordformtask creates nodes in the build/nodesdirectory. The example Cordformtask used in this document creates three nodes: Notary, PartyA, and PartyB, however you are free to spin up more nodes, specify what nodes you need on the network, change node names, and update node configurations. - Nodes deployed via Dockerformuse Docker containers. A Dockerformtask is similar to Cordformbut it provides an extra file that enables you to easily spin up nodes using docker-compose. This creates a docker-composefile that enables you to run a single command to control the deployment of Corda nodes and databases (instead of deploying each node/database manually). Specific requirements Cordformtasks require you to deploy each Corda node and database separately. Dockerformtasks require Docker to be installed on the local host. Tasks using the Cordform plug-in Run this example task to create the following three nodes in the build/nodes directory: A Notary node, which: - Provides a validating Notary service. - Runs the corda-financeCorDapp. PartyA and PartyB nodes, each of which: - Does not provide any services. - Runs the corda-financeCorDapp. - Has an RPC (Remote Procedure Call) user ( user1), which enables you to log in the node via RPC. All three nodes also include any CorDapps defined in the project’s source directories, even if these CorDapps are not listed in each node’s cordapps setting. As a result, if you run the deployNodes task from the template CorDapp, for example, it will automatically build and add the template CorDapp to each node. Cordformallows you specify any number of nodes and you can define their configurations and names as needed. The following example, as defined in the Kotlin CorDapp Template, shows a Cordform task called deployNodes that creates the three nodes described above: Notary, PartyA, and PartyB. 
} h2Port 10012 cordapps = ["$corda_release_distribution:corda-finance:$corda_release_version"] // Grants user1 the ability to start the MyFlow flow. rpcUsers = [[ user: "user1", "password": "test", "permissions": ["StartFlow.net.corda.flows.MyFlow"]]] } } The configuration values used in the example are described below. Required configuration name<string> - use this configuration option to specify the legal identity name of the Corda node. For more information, see myLegalName. For example: name "O=PartyA,L=London,C=GB" p2pAddress<string> - use this configuration option to specify the address/port the node uses for inbound communication from other nodes. For more information, see p2pAddress. Required if p2pPortis not specified. For example: p2pAddress "example.com:10002" p2pPort<integer> - use this configuration option to specify the port the node uses for inbound communication from other nodes. The assumed IP address is localhost. For more information, see p2pAddress. For example: p2pPort 10006 // "localhost:10006" rpcSettings<config> - use this configuration option to specify RPC settings for the node. For more information, see rpcSettings. For example: rpcSettings { port 10006 adminPort 10026 } Optional configuration notary<config> - use this configuration option to specify the node as a Notary node. Required> for Notary nodes. For more information, see Notary. devMode<boolean> - use this configuration option to enable development mode when you set its value to true. For more information, see devMode. For example: devMode true rpcUsers<list> - use this configuration option to set the RPC users for the node. For more information, see rpcUsers. You can use arbitrary values in this configuration block - “incorrect” settings will not cause a DSL error. An example follows below: rpcUsers = [[ user: "user1", "password": "test", "permissions": ["StartFlow.net.corda.flows.MyFlow"]]] configFile<string> - use this configuration option to generate an extended node configuration. For more information, see extended node configuration. For example: configFile = "samples/trader-demo/src/main/resources/node-b.conf" sshdPort<integer> - use this configuration option to specify the SSH port for the Docker container. This will be mapped to the same port on the host. If sshdPortis specified, then that port must be available on the host and not in use by some other service. If sshdPortis not specified, then a default value will be used for the SSH port on the container. Use the docker port <container_name>command to check which port has been allocated on the host for your container. For more information, see sshd. For example: sshd { port = 2222 } You can extend the deployNodes task with more node {} blocks to generate as many nodes as necessary for your application. To extend node configuration beyond the properties defined in the deployNodes task, use the configFile property with the file path (relative or absolute) set to an additional configuration file. This file should follow the standard Node configuration format of node.conf. The properties set there will be appended to the generated node configuration. deployNodestask, both properties will be present in generated node configuration. Alternatively, you can also add the path to the additional configuration file while running the gradle task via the -PconfigFile command-line option. However, this will result in the same configuration file being applied to all nodes. 
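For example, a command-line invocation with the -PconfigFile option might look like the following sketch (the task name and file path are taken from the examples on this page; substitute your own values). Note that, as described above, the same file is then applied to all nodes:

./gradlew deployNodes -PconfigFile="samples/trader-demo/src/main/resources/node-b.conf"

On Windows, use gradlew.bat instead of ./gradlew.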
Following on from the previous example, the PartyB node in the next example below has additional configuration options added from a file called" } } The drivers Cordform parameter in the node entry lists paths of the files to be copied to the drivers sub-directory of the node. To copy the same file to all nodes, define ext.drivers in the top level, and reuse it for each node by setting drivers=ext.drivers. task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) { ext.drivers = ['lib/my_common_jar.jar'] [...] node { name "O=PartyB,L=New York,C=US" [...] drivers = ext.drivers + ['lib/my_specific_jar.jar'] } } Package namespace ownership To configure package namespace ownership, use the optional networkParameterOverrides and packageOwnership blocks, in a similar way to how the configuration file is used by the Network Bootstrapper tool. For example: task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) { [...] networkParameterOverrides { packageOwnership { "com.mypackagename" { keystore = "_teststore" keystorePassword = "MyStorePassword" keystoreAlias = "MyKeyAlias" } } } [...] } Sign CorDapp .jar files The default Cordform behaviour is to deploy CorDapp .jar files “as built”. - Prior to Corda 4.0, all CorDapp .jarfiles were unsigned. - As of Corda 4.0, CorDapp .jarfiles created by the gradle cordappplug-in are signed by a Corda development certificate by default. You can use the Cordform .jar files. Signing a CorDapp enables its contract classes to use signature constraints instead of other types of constraints, such as Contract Constraints. The signing task may use an external keystore, or create a new one. You can use the following parameters in the enabled- the control flag to enable the signing process. It is set to falseby default. Set to trueto enable signing. all- if set to true(default), all CorDapps inside the cordappsub-directory will be signed. If set to false, only the generated Cordapp will be signed. options- any relevant parameters of SignJar ANT task and GenKey ANT task. By default the .jarfile is signed by a Corda development key. You can specify the external keystore can be specified. The minimal list of required options is shown below. For other options, see SignJar task. keystore- the path to the keystore file. The default setting is cordadevcakeys.jks. The keystore is shipped with the plug-in. alias- the alias to sign under. The default value is cordaintermediateca. storepass- the keystore password. The default value is cordacadevpass. keypass- the private key password, if it is different from the keystore password. The default value is cordacadevkeypass. storetype- the keystore type. The default value is JKS. dname- the distinguished name for the entity. Only use this option when generateKeystoreis set to true(see below). keyalg- the method to use when generating a name-value pair. The default value is RSAbecause Corda does not support DSA. Only use this option when generateKeystoreis set to true(see below). generateKeystore- the flag to generate a keystore. The default value is false. If set to true, an “ad hoc” keystore is created and its key is used instead of the default Corda development key or any external key. The same optionsto specify an external keystore are used to define the newly created keystore. In addition, dnameand keyalgare required. Other options are described in GenKey task. If the existing keystore is already present, the task will reuse it. 
However if the file is inside the builddirectory, then it will be deleted when the gradle cleantask is run. The example below shows the minimal set of options .jar files are checked by signature constraints by default. You can force them to be checked by zone constraints by adding contract class names to the includeWhitelist entry - the list will generate an include_whitelist.txt file used internally by the Network Bootstrapper tool. Before you add includeWhitelist to the deployNodes task, see Contract Constraints to understand the implications of using different constraint types. The snippet below configures contracts classes from the Finance CorDapp to be verified using zone constraints instead of signature constraints: task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) { includeWhitelist = [ "net.corda.finance.contracts.asset.Cash", "net.corda.finance.contracts.asset.CommercialPaper" ] //... Optional migration step If you are migrating your database schema from an older Corda version to Corda 4.6, you must add the following parameter to the node section in the build.gradle and set it to true, as follows: runSchemaMigration = true This step runs the full schema migration process as the last step of the Cordform task, and leave the nodes ready to run. Run the Cordform task To create the nodes defined in the deployNodes task example above, run the following command in a command prompt or a terminal window, from the root of the project where the deployNodes task is defined: - Linux/macOS: ./gradlew deployNodes - Windows: gradlew.bat deployNodes This command creates the nodes in the build/nodes directory. A node directory is generated for each node defined in the deployNodes task, plus a runnodes shell script (or a batch file on Windows) to run all the nodes at once for testing and development purposes. If you make any changes to your CorDapp source or deployNodes task, you will need to re-run the task to see the changes take effect. Tasks using the Dockerform plug-in You need both Docker and docker-compose installed and enabled to use this method. Docker CE (Community Edition) is sufficient. Please refer to Docker CE documentation and Docker Compose documentation for installation instructions for all major operating systems. Dockerform supports the following configuration options for each node: name notary cordapps rpcUsers useTestClock You do not need to specify the node ports because every node has a separate container so no ports conflicts will occur. Every node will expose port 10003 for RPC connections. Docker will then map these to available ports on your host machine. You should interact with each node via its shell over SSH - see the node configuration options for more information. To enable the shell, you need to set the sshdPort number for each node in the gradle task - this is explained in the section run the Dockerform task further below. For example: node { name "O=PartyA,L=London,C=GB" p2pPort 10002 rpcSettings { address("localhost:10003") adminAddress("localhost:10023") } rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]] sshdPort 2223 } sshdport number for a node, it will use the default value 2222. Please run the docker pscommand to check the allocated port on your host that maps to this port. The Docker image associated with each node can be configured in the Dockerform task. This will initialise every node in the Dockerform task with the specified Docker image. 
If you need nodes with different Docker images, you can edit the docker-compose.yml file with your preferred image. Before running any Corda Enterprise Docker images, you must accept the license agreement and indicate that you have done this by setting the environment variable ACCEPT_LICENSE to YES or Y on your machine. If you do not do this, none of the Docker containers will start. As an alternative, you can specify this parameter when running the docker-compose up command, for example: ACCEPT_LICENSE=Y docker-compose up Specify an external database You can configure Dockerform to use a standalone database to test with non-H2 databases. For example, to use PostgresSQL, you need to make the following changes to your Cordapp project: - Create a file called postgres.gradlein your Cordapp directory, and insert the following code block: ext { postgresql_version = '42.2.12' postgres_image_version = '11' dbUser = 'myuser' dbPassword = 'mypassword' dbSchema = 'myschema' dbName = 'mydb' dbPort = 5432 dbHostName = 'localhost' dbDockerfile = 'Postgres_Dockerfile' dbInit = 'Postgres_init.sh' dbDataVolume = [ hostPath : 'data', containerPath : '/var/lib/postgresql/data:\${SUFFIX}', containerPathArgs : [ SUFFIX : "rw" ] ] postgres = [ dataSourceProperties: [ dataSourceClassName: 'org.postgresql.ds.PGSimpleDataSource', dataSource: [ user : dbUser, password: dbPassword, url : "jdbc:postgresql://\${DBHOSTNAME}:\${DBPORT}/\${DBNAME}?currentSchema=\${DBSCHEMA}", urlArgs : [ DBHOSTNAME : dbHostName, DBPORT : dbPort, DBNAME : dbName, DBSCHEMA : dbSchema ] ] ], database: [ schema : dbSchema ], dockerConfig: [ dbDockerfile : dbDockerfile, dbDockerfileArgs: [ DBNAME : dbName, DBSCHEMA : dbSchema, DBUSER : dbUser, DBPASSWORD : dbPassword, DBPORT : dbPort ], dbUser : dbUser, dbPassword : dbPassword, dbSchema : dbSchema, dbName : dbName, dbPort : dbPort, dbHostName : dbHostName, dbDatabase : dbName, dbDataVolume : dbDataVolume ] ] } apply plugin: 'net.corda.plugins.cordformation' dependencies { cordaDriver "org.postgresql:postgresql:$postgresql_version" } def generateInitScripts = tasks.register('generateInitScripts') { Task task -> def initialDockerfile = file("$buildDir/$dbDockerfile") def initialScript = file( "$buildDir/$dbInit") task.inputs.properties(project['postgres']) task.outputs.files(initialDockerfile, initialScript) /* * Dockerfile to initialise the PostgreSQL database. */ task.doLast { initialDockerfile.withPrintWriter('UTF-8') { writer -> writer << """\ # Derive from postgres image FROM postgres:$postgres_image_version ARG DBNAME=$dbName ARG DBSCHEMA=$dbSchema ARG DBUSER=$dbUser ARG DBPASSWORD=$dbPassword ARG DBPORT=$dbPort ENV POSTGRES_DB=\$DBNAME ENV POSTGRES_DB_SCHEMA=\$DBSCHEMA ENV POSTGRES_USER=\$DBUSER ENV POSTGRES_PASSWORD=\$DBPASSWORD ENV PGPORT=\$DBPORT # Copy all postgres init file to the docker entrypoint COPY ./$dbInit /docker-entrypoint-initdb.d/$dbInit # Allow postgres user to run init script RUN chmod 0755 /docker-entrypoint-initdb.d/$dbInit """ } /** * Append the persistence configuration if persistence is required (i.e., persistence=true) */ if (project.hasProperty("dbDataVolume")) { initialDockerfile.withWriterAppend('UTF-8') { writer -> writer << """\ # Associate the volume with the host user USER 1000:1000 # Initialise environment variable with database directory ENV PGDATA=/var/lib/postgresql/data/pgdata """ } } /* * A UNIX script to generate the init.sql file that * PostgreSQL needs. This must use UNIX line endings, * even when generated on Windows. 
*/ initialScript.withPrintWriter('UTF-8') { writer -> writer << """\ #!/usr/bin/env bash # Postgres database initialisation script when using Docker images dbUser=\${POSTGRES_USER:-"$dbUser"} dbPassword=\${POSTGRES_PASSWORD:-"$dbPassword"} dbSchema=\${POSTGRES_DB_SCHEMA:-"$dbSchema"} dbName=\${POSTGRES_DB:-"$dbName"} psql -v ON_ERROR_STOP=1 --username "\$dbUser" --dbname "\$dbName" <<-EOSQL CREATE SCHEMA \$dbSchema; GRANT USAGE, CREATE ON SCHEMA \$dbSchema TO \$dbUser; GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES ON ALL tables IN SCHEMA \$dbSchema TO \$dbUser; ALTER DEFAULT privileges IN SCHEMA \$dbSchema GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES ON tables TO \$dbUser; GRANT USAGE, SELECT ON ALL sequences IN SCHEMA \$dbSchema TO \$dbUser; ALTER DEFAULT privileges IN SCHEMA \$dbSchema GRANT USAGE, SELECT ON sequences TO \$dbUser; ALTER ROLE \$dbUser SET search_path = \$dbSchema; EOSQL """.replaceAll("\r\n", "\n") } initialScript.executable = true } } - In the build.gradlefile, add the gradle task generateInitScriptsto the dependsOnlist of the prepareDockerNodestask, add the dockerConfigelement, and initialise it with the postgresblock. An example is shown below: task prepareDockerNodes(type: net.corda.plugins.Dockerform, dependsOn: ['jar', 'generateInitScripts']) { [...] node { [...] } // The postgres block from the postgres.gradle file dockerConfig = postgres } The postgres.gradle file includes the following: - A gradle task called generateInitScriptsused to generate the Postgres Docker image files. - A set of variables used to initialise the Postgres Docker image. To set up the external database, you must place the following two files in the build directory: Postgres_Dockerfile- a wrapper for the base Postgres Docker image. Postgres_init.sh- a shell script to initialise the database. The Postgres_Dockerfile is referenced in the docker-compose.yml file and allows for a number of arguments for configuring the Docker image. You can use the following configuration parameters in the postgres.gradle file: To make the database files persistent across multiple docker-compose runs, you must set the dbDataVolume parameter. If this variable is commented out, the database files will be removed after every docker-compose run. Run the Dockerform task To run the Dockerform task, follow the steps below. Dockerformallows you specify any number of nodes and you can define their configurations and names as needed. - Open the build.gradlefile of your Cordapp project and add a new gradle task, as shown in the example() sshdPort 2222 } node { name "O=PartyA,L=London,C=GB" p2pPort 10002 rpcSettings { address("localhost:10003") adminAddress("localhost:10023") } rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]] sshdPort 2223 } node { name "O=PartyB,L=New York,C=US" p2pPort 10002 rpcSettings { address("localhost:10003") adminAddress("localhost:10023") } rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]] sshdPort 2224 } // This property needs to be outside the node {...} elements dockerImage = "corda/corda-zulu-java1.8-4.6" } 2222. - To create the nodes defined in the prepareDockerNodesgradle task added in the first step, run the following command in a command prompt or a terminal window, from the root of the project where the prepareDockerNodestask is defined: - Linux/macOS: ./gradlew prepareDockerNodes - Windows: gradlew.bat prepareDockerNodes This command creates the nodes in the build/nodes directory. 
A node directory is generated for each node defined in the prepareDockerNodes task. The task also creates a docker-compose.yml file in the build/nodes directory. External database configuration If you configure an external database, a Postgres_Dockerfile file and Postgres_init.sh file are also generated in the build directory. If you make any changes to your CorDapp source or prepareDockerNodes task, you will need to re-run the task to see the changes take effect. If the external database is not defined and configured properly, as described in specifying an external database, the files Postgres_Dockerfile and Postgres_init.sh will not be generated. In this case, each Corda node is associated with a Postgres database. Only one Corda node can connect to the same database. While there is no maximum number of nodes you can deploy with Dockerform, you are constrained by the maximum available resources on the machine running this task, as well as the overhead introduced by every Docker container that is started. All the started nodes run in the same Docker overlay network. The connection settings to the Postgres database are provided to each node through the postgres.gradle file. The Postgres JDBC driver is provided via Maven as part of the cordaDrive gradle configuration, which is also specified in the dependencies block of the postgres.gradle file. Note that this feature is not designed for users to access the database via elevated or admin rights - you must only use such configuration changes for testing/development purposes.
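As an illustration of how the generated environment is typically started, the following sketch assumes the default build/nodes output directory and the sshdPort values from the example task above; the actual port mappings and credentials depend on your own configuration:

cd build/nodes
ACCEPT_LICENSE=Y docker-compose up -d   # accept the Corda Enterprise license and start all containers
docker ps                               # check which host ports Docker mapped for each container
ssh user1@localhost -p 2223             # connect to PartyA's node shell (user/password as set in rpcUsers)

When you are finished, docker-compose down stops and removes the containers.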
https://docs.corda.net/docs/corda-enterprise/4.6/node/deploy/generating-a-node.html
2021-04-10T14:50:10
CC-MAIN-2021-17
1618038057142.4
[]
docs.corda.net
Encryption - Internal Certificate Authority - PGP Key Servers - Encryption Settings - Internal Recipients Encryption - External Recipients Encryption Encryption Settings - The Trigger encryption by e-mail subject setting allows Internal Recipients to encrypt email to any External Recipient by entering a special keyword in the subject of any email. This setting enables or disables this feature. We recommend you set it to Enabled (Figure 1). Figure 1 - The Encryption by e-mail subject keyword sets the special keyword to be entered in the subject of an email in order to encrypt that email message. Enter a unique keyword that would not normally appear in the subject of a typical email. We recommend you set this field to [encrypt] or [secure], making sure to include the brackets (Figure 2). Figure 2 - The Remove e-mail subject keyword after encryption field sets the system to automatically remove the special keyword from the subject after the email has been encrypted. We recommend you set it to Enabled (Figure 3). Figure 3 - The Secure Portal Address field sets the address that will be included in PDF-encrypted emails, which the recipient must navigate to in order to decrypt, view and reply to encrypted PDF emails (Figure 4). Figure 4 - The PDF Reply Sender E-mail sets the From address used when an external recipient replies to an encrypted PDF email from the Secure Portal (Figure 5). Figure 5 - The Server Secret Keyword, Client Secret Keyword and Mail Secret Keyword are used to protect external resources against tampering. For example, if an external user replies to an encrypted PDF email, the Server Secret Keyword ensures that the user can only reply to a message generated by this server. If you followed the Getting Started guide, you should have already generated a new Server Secret Keyword, Client Secret Keyword and Mail Secret Keyword. If not, ensure you generate them by clicking on the icon next to each field, which will automatically generate a keyword and enter it in the respective field (Figure 6). Figure 6 - Click on the Save Settings button to save your settings. Internal Recipients Encryption If Internal Recipients have not been added to your system under Gateway --> Internal Recipients, this page will not show a recipient listing. By default, when Internal Recipients are added to Hermes SEG, they are NOT configured with the ability to send encrypted email. Each Internal Recipient must be individually configured for the type of encryption you wish them to use. On this page, a listing of only previously added Internal Recipients will appear. Note that under the Encryption Status section the PDF, S/MIME and PGP columns are set to No. Additionally, under the S/MIME Cert(s) section, the certificate icons are disabled, indicating that no PGP Keyrings are present (Figure 1). Figure 1
https://docs.deeztek.com/books/hermes-seg-administrator-guide/chapter/encryption/export/html
2021-04-10T14:38:33
CC-MAIN-2021-17
1618038057142.4
[]
docs.deeztek.com
About View Storage in Greenplum Database About View Storage in Greenplum Database A view is similar to a table, both are relations - that is "something with columns". All such objects are stored in the catalog table pg_class. These are the general differences: - A view has no data files (because it holds no data). - The value of pg_class.relkind for a view is v rather than r. - A view has an ON SELECT query rewrite rule called _RETURN. The rewrite rule contains the definition of the view and is stored in the ev_action column of the pg_rewrite catalog table. For more technical information about views, see the PostgreSQL documentation about Views and the Rule System. Also, a view definition is not stored as a string, but in the form of a query parse tree. Views are parsed when they are created, which has several consequences: - Object names are resolved during CREATE VIEW, so the current setting of search_path affects the view definition. - Objects are referred to by their internal immutable object ID rather than by their name. Consequently, renaming an object or column referenced in a view definition can be performed without dropping the view. - Greenplum Database can determine exactly which objects are used in the view definition, so it can add dependencies on them. Note that the way Greenplum Database handles views is quite different from the way Greenplum Database handles functions: function bodies are stored as strings and are not parsed when they are created. Consequently, Greenplum Database does not know on which objects a given function depends. Where View Dependency Information is Stored - pg_class - object information including tables and views. The relkind column describes the type of object. - pg_depend - object dependency information for database-specific (non-shared) objects. - pg_rewrite - rewrite rules for tables and views. - pg_attribute - information about table columns. - pg_namespace - information about schemas (namespaces). It is important to note that there is no direct dependency of a view on the objects it uses: the dependent object is actually the view's rewrite rule. That adds another layer of indirection to view dependency information.
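To see this dependency chain in practice, you can query the catalog tables directly. The sketch below lists the relations a given view depends on by walking from the view's _RETURN rewrite rule to the objects it references; it assumes a view named my_view that is visible on your search_path:

-- Relations referenced by the view's ON SELECT rewrite rule
SELECT DISTINCT ref.relname AS referenced_relation
FROM pg_class v
JOIN pg_rewrite r ON r.ev_class = v.oid                 -- the view's _RETURN rule
JOIN pg_depend d ON d.classid = 'pg_rewrite'::regclass
                AND d.objid = r.oid                     -- dependencies recorded for that rule
                AND d.refclassid = 'pg_class'::regclass
JOIN pg_class ref ON ref.oid = d.refobjid               -- the referenced tables and views
WHERE v.relkind = 'v'
  AND v.relname = 'my_view'
  AND ref.oid <> v.oid;                                 -- the rule also depends on the view itself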
https://docs.greenplum.org/6-13/admin_guide/ddl/ddl-view-storage.html
2021-04-10T14:41:29
CC-MAIN-2021-17
1618038057142.4
[]
docs.greenplum.org
I need to share groups I created in Teams with my clients, who are external users to the Exchange server. However, our IT team is concerned about security and does not want to give me access to invite guests to the groups. Can you explain the real security risk and whether it is a genuine concern or not? Kind Regards
https://docs.microsoft.com/en-us/answers/questions/21258/allowing-external-users-in-teams-and-the-security.html
2021-04-10T16:07:21
CC-MAIN-2021-17
1618038057142.4
[]
docs.microsoft.com
ImportError: DLL Load Failed: The Specified Procedure Could Not Be Found¶ When using version 2014 on Windows, you may have problems running QuantumATK or starting calculations in the Job Manager inside QuantumATK, and it shows the following error: Traceback (most recent call last): File "", line 1, in File ".\zipdir\NL\__init__.py", line 5, in File ".\zipdir\NLEngine.py", line 36, in File ".\zipdir\NLEngine.py", line 18, in swig_import_helper ImportError: DLL load failed: The specified procedure could not be found. This is caused by having multiple versions of QuantumATK installed at the same time. That’s in principle possible, but exactly in the 2014 version (not 13.8, and not 2015 or later), QuantumATK resolves the DLL location by the PATH environment variable, which means that the 2014.x “bin” directory must come first in the path, before any other QuantumATK bin directory. Since the directory is added last to the path upon installation, the issue typically only occurs if you have 13.8 or an older 2014.x version installed. The easiest solution is therefore to uninstall all older versions, or at least remove them from your PATH, then all should work fine. If you want to keep 13.8 (you really should not keep 2014.2 if you install 2014.3), you must ensure the 2014 version “bin” directory comes before the 13.8 bin directory in the PATH. If you have access to 2015 or later, you might as well uninstall both 13.8 and 2014 and just use the newer version! For instructions on editing the PATH on Windows, see e.g. here.
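As a quick sanity check before editing anything, you can inspect the PATH from a Command Prompt and, if needed, prepend the 2014 "bin" directory for the current session only. The install path below is just an example – use the directory where your QuantumATK 2014 version is actually installed:

echo %PATH%
set PATH=C:\Program Files\QuantumWise\VNL-ATK-2014.3\bin;%PATH%

A permanent change still has to be made through the Windows environment variable settings, as described in the link above.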
https://docs.quantumatk.com/faq/faq_installation_dllerror.html
2021-04-10T14:54:39
CC-MAIN-2021-17
1618038057142.4
[]
docs.quantumatk.com
Set the Elastic DRS policy on a cluster to optimize for your workloads' needs. In a new SDDC, elastic DRS uses the Default Storage Scale-Out policy, adding hosts only when storage utilization exceeds the threshold of 75%. You can select a different policy if it provides better support for your workload VMs. For any policy, scale-out is triggered when a cluster reaches the high threshold for any resource. Scale-in is triggered only after all of the low thresholds have been reached. Note: For two-host SDDCs, only the Default Storage Scale-Out policy is available. The following policies are available: - Optimize for Best Performance - This policy adds hosts more quickly and removes hosts more slowly in order to avoid performance slowdowns as demand spikes. It has the following thresholds: - Optimize for Lowest Cost - This policy adds hosts more slowly and removes hosts more quickly in order to provide baseline performance while keeping host counts to a practical minimum. It has the following thresholds: - Optimize for Rapid Scale-Out - This policy adds multiple hosts at a time when needed for memory or CPU, and adds hosts incrementally when needed for storage. By default, hosts are added two at a time, but beginning with SDDC version 1.14 you can specify a larger increment if you need faster scaling for disaster recovery and similar use cases. When using this policy, scale-out time increases with the number of hosts added and, when the increment is large (12 hosts), can take up to 40 minutes in some configurations. You must manually remove these hosts when they are no longer needed. This policy has the following thresholds: Procedure - Log in to the VMC Console at. - Click on the SDDC and then click Summary. - On the card for the SDDC or cluster, click Edit EDRS Settings. - Select the Elastic DRS policy you want to use.The Default Storage Scale-Out policy has no parameters. For other policies, specify a Minimum cluster size of 3 or more and a Maximum cluster size consistent with your expected workload resource consumption. The Maximum cluster size applies to CPU and Memory. To maintain storage capacity and ensure data durability, the service can add more hosts than what you specified in Maximum cluster size. - Click Save.
https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-961C4B32-6093-4C2E-AFE5-5B1F56BF4EEE.html
2021-04-10T15:24:03
CC-MAIN-2021-17
1618038057142.4
[]
docs.vmware.com
Enable the eBPF dataplane Big picture This guide explains how to enable the eBPF dataplane; a high-performance alternative to the standard (iptables based) dataplane for both Calico and kube-proxy. Value The eBPF dataplane mode has several advantages over standard linux networking pipeline mode: - It scales to higher throughput. - It uses less CPU per GBit. It. To learn more and see performance metrics from our test environment, see the blog, Introducing the Calico eBPF dataplane. Limitations eBPF mode currently has some limitations relative to the standard Linux pipeline mode: - eBPF mode only supports x86-64. (The eBPF programs are not currently built for the other platforms.) - eBPF mode does not yet support IPv6. - eBPF mode does not yet support host endpoint doNotTrackpolicy (but it does support normal, pre-DNAT and apply-on-forward policy for host endpoints). - When enabling eBPF mode, pre-existing connections continue to use the non-BPF datapath; such connections should not be disrupted, but they do not benefit from eBPF mode’s advantages. - Disabling eBPF mode is disruptive; connections that were handled through the eBPF dataplane may be broken and services that do not detect and recover may need to be restarted. - Hybrid clusters (with some eBPF nodes and some standard dataplane nodes) are not supported. (In such a cluster, NodePort traffic from eBPF nodes to non-eBPF nodes will be dropped.) This includes clusters with Windows nodes. - eBPF mode does not support floating IPs. - eBPF mode does not support SCTP, either for policy or services. - eBPF mode requires that node IP autodetection is enabled even in environments where Calico CNI and BGP are not in use. In eBPF mode, the node IP is used to originate VXLAN packets when forwarding traffic from external sources to services. - eBPF mode does not support the “Log” action in policy rules. Features This how-to guide uses the following Calico features: -. Before you begin… eBPF mode has the following pre-requisites: A supported Linux distribution: - Ubuntu 20.04 (or Ubuntu 18.04.4+, which has an updated kernel). - Red Hat v8.2 with Linux kernel v4.18.0-193 or above (Red Hat have backported the required features to that build). - Another supported distribution with Linux kernel v5.3 or above. If Calico does not detect a compatible kernel, Calico will emit a warning and fall back to standard linux networking. - On each node, the BPF filesystem must be mounted at /sys/fs/bpf. This is required so that the BPF filesystem persists when Calico is restarted. If the filesystem does not persist then pods will temporarily lose connectivity when Calico is restarted and host endpoints may be left unsecured (because their attached policy program will be discarded). For best pod-to-pod performance, an underlying network that doesn’t require Calico to use an overlay. For example: - A cluster within a single AWS subnet. - A cluster using a compatible cloud provider’s CNI (such as the AWS VPC CNI plugin). - An on-prem cluster with BGP peering configured. If you must use an overlay, we recommend that you use VXLAN, not IPIP. VXLAN has much better performance than IPIP in eBPF mode due to various kernel optimisations. - The underlying network must be configured to allow VXLAN packets between Calico hosts (even if you normally use IPIP or non-overlay for Calico traffic). In eBPF mode, VXLAN is used to forward Kubernetes NodePort traffic, while preserving source IP. eBPF mode honours the Felix VXLANMTUsetting (see Configuring MTU). 
- A stable way to address the Kubernetes API server. Since eBPF mode takes over from kube-proxy, Calico needs a way to reach the API server directly. - The base requirements also apply. Note: The default kernel used by EKS is not compatible with eBPF mode. If you wish to try eBPF mode with EKS, follow the Creating an EKS cluster for eBPF mode guide, which explain how to set up a suitable cluster. How to - Verify that your cluster is ready for eBPF mode - Configure Calico to talk directly to the API server - Configure kube-proxy - Enable eBPF mode - Try out DSR mode - Reversing the process Verify that your cluster is ready for eBPF mode This section explains how to make sure your cluster is suitable for eBPF mode. To check that the kernel on a node is suitable, you can run uname -rv The output should look like this: 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 In this case the kernel version is v5.4, which is suitable. On Red Hat-derived distributions, you may see something like this: 4.18.0-193.el8.x86_64 ([email protected]) Since the Red Hat kernel is v4.18 with at least build number 193, this kernel is suitable. To verify that the BPF filesystem is mounted, on the host, you can run the following command: mount | grep "/sys/fs/bpf" If the BPF filesystem is mounted, you should see: none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700) If you see no output, then the BPF filesystem is not mounted; consult the documentation for your OS distribution to see how to make sure the file system is mounted at boot in its standard location /sys/fs/bpf. This may involve editing /etc/fstabor adding a systemdunit, depending on your distribution. If the file system is not mounted on the host then eBPF mode will work normally until Calico is restarted, at which point workload netowrking will be disrupted for several seconds. Configure Calico to talk directly to the API server In eBPF mode, Calico implements Kubernetes service networking directly (rather than relying on kube-proxy). This means that, like kube-proxy, Calico must connect directly to the Kubernetes API server rather than via the API server’s ClusterIP. First, make a note of the address of the API server: If you have a single API server with a static IP address, you can use its IP address and port. The IP can be found by running: kubectl get endpoints kubernetes -o wide The output should look like the following, with a single IP address and port under “ENDPOINTS”: NAME ENDPOINTS AGE kubernetes 172.16.101.157:6443 40m If there are multiple entries under “ENDPOINTS” then your cluster must have more than one API server. In that case, you should try to determine the load balancing approach used by your cluster and use the appropriate option below. - If using DNS load balancing (as used by kops), use the FQDN and port of the API server api.internal.<clustername>. - If you have multiple API servers with a load balancer in front, you should use the IP and port of the load balancer. Tip: If your cluster uses a ConfigMap to configure kube-proxyyou can find the “right” way to reach the API server by examining the config map. For example: $ kubectl get configmap -n kube-system kube-proxy -o jsonpath='{.data.kubeconfig}' | grep server` server: In this case, the server is d881b853aea312e00302a84f1e346a77.gr7.us-west-2.eks.amazonaws.comand the port is 443 (the standard HTTPS port). 
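Once you have the host and port, it can be worth a quick reachability check from one of your nodes before continuing – for example with curl, substituting the address you found above (an HTTP 401/403 response still shows that the API server is reachable):

curl -k https://172.16.101.157:6443/version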
The next step depends on whether you installed Calico using the operator, or a manifest: If you installed Calico using the operator, create the following config map in the tigera-operator namespace using the host and port determined above: kind: ConfigMap apiVersion: v1 metadata: name: kubernetes-services-endpoint namespace: tigera-operator data: KUBERNETES_SERVICE_HOST: "<API server host>" KUBERNETES_SERVICE_PORT: "<API server port>" Wait 60s for kubelet to pick up the ConfigMap (see Kubernetes issue #30189); then, restart the operator to pick up the change: kubectl delete pod -n tigera-operator -l k8s-app=tigera-operator The operator will then do a rolling update of Calico to pass on the change. Confirm that pods restart and then reach the Running state with the following command: watch kubectl get pods -n calico-system If you do not see the pods restart then it’s possible that the ConfigMap wasn’t picked up (sometimes Kubernetes is slow to propagate ConfigMaps due the above issue). You can try restarting the operator again. If you installed Calico using a manifest, create the following config map in the kube-system namespace using the host and port determined above: kind: ConfigMap apiVersion: v1 metadata: name: kubernetes-services-endpoint namespace: kube-system data: KUBERNETES_SERVICE_HOST: "<API server host>" KUBERNETES_SERVICE_PORT: "<API server port>" kubectl delete pod -n kube-system -l k8s-app=calico-kube-controllers And, if using Typha: kubectl delete pod -n kube-system -l k8s-app=calico-typha Confirm that pods restart and then reach the Running state" Configure kube-proxy In eBPF mode Calico replaces kube-proxy so running both would waste resources. This section explains how to disable kube-proxy in some common environments. Clusters that run kube-proxy with a DaemonSet (such as kubeadm) For a cluster that runs kube-proxy in a DaemonSet (such as a kubeadm-created cluster), you can disable kube-proxy, reversibly, by. If you choose not to disable kube-proxy (for example, because it is managed by your Kubernetes distribution), then you must change Felix configuration parameter BPFKubeProxyIptablesCleanupEnabled to false. This can be done with calicoctl as follows: calicoctl patch felixconfiguration default --patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}' If both kube-proxy and BPFKubeProxyIptablesCleanupEnabled is enabled then kube-proxy will write its iptables rules and Felix will try to clean them up resulting in iptables flapping between the two. OpenShift If you are running OpenShift, you can disable kube-proxy as follows: kubectl patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": false}}' To re-enable it: kubectl patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": true}}' Enable eBPF mode To enable eBPF mode, change Felix configuration parameter BPFEnabled to true. This can be done with calicoctl, as follows: calicoctl patch felixconfiguration default --patch='{"spec": {"bpfEnabled": true}}' Enabling eBPF mode should not disrupt existing connections but existing connections will continue to use the standard Linux datapath. You may wish to restart pods to ensure that they start new connections using the BPF dataplane. Try out DSR mode Direct return mode skips a hop through the network for traffic to services (such as node ports) from outside the cluster. This reduces latency and CPU overhead but it requires the underlying network to allow nodes to send traffic with each other’s IPs. 
In AWS, this requires all your nodes to be in the same subnet and the source/dest check to be disabled. DSR mode is disabled by default; to enable it, set the BPFExternalServiceMode Felix configuration parameter to "DSR". This can be done with calicoctl: calicoctl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "DSR"}}' To switch back to tunneled mode, set the configuration parameter to "Tunnel": calicoctl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "Tunnel"}}' Switching external traffic mode can disrupt in-progress connections. Reversing the process To revert to standard Linux networking, set the BPFEnabled Felix configuration parameter back to false (using the same calicoctl patch command shown in "Enable eBPF mode" above) and, if you disabled kube-proxy, re-enable it. Remember that disabling eBPF mode is disruptive, so check that workloads re-establish any connections disrupted by the switch. Send us feedback The eBPF dataplane is still fairly new, and we want to hear about your experience. Please don't hesitate to connect with us via the Calico Users Slack group.
https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf
2021-04-10T14:14:41
CC-MAIN-2021-17
1618038057142.4
[]
docs.projectcalico.org
Recommended Product Image Sizes Your Product images are displayed responsively, so their actual size will vary according to the device the viewer is using. However, to get the images to display nicely we recommend you use images with the following ratios: For the default theme, try using an aspect ratio of 12:7. In other words, multiply both 12 and 7 by any number to get an image size. For example, 300x175 or 1200x700. For the Signature theme, try using an aspect ratio of 9:6. So multiply both 9 and 6 by any number. For example, 90x60, 180x120, or 900x600. For Teach PRO we recommend 400x150 pixels. For the Academy theme, try using 295x295 pixels. And for Vidfy, you can use 2500x450 pixels.
https://docs.promotelabs.com/article/839-recommended-product-image-sizes
2021-04-10T15:00:48
CC-MAIN-2021-17
1618038057142.4
[]
docs.promotelabs.com
This magical component takes care of translating instructions from Direct3D 9/10/11 (on Windows) to Vulkan (on Linux) and is developed by doitsujin and many other contributors. Essentially, DXVK is a .dll override bundle which, when integrated into a wine prefix, detects and translates calls from DirectX to Vulkan. This is done automatically, without any user intervention. Basic DXVK is installed and enabled by Bottles when creating a bottle in the following environments: Gaming environment Software environment You can check whether it is active in the bottle preferences, under the "DXVK for Direct3D" option. When this component is enabled, a backup of the old dlls is created in the wineprefix and automatically restored when it is disabled. You can keep this component updated from the Bottles preferences. DXVK comes with a large number of environment variables for finer configuration. If you don't know what we're talking about, don't touch anything. Bottles preconfigures those that are essential for correct operation: DXVK_STATE_CACHE_PATH is preconfigured and points to the root path of the bottle. DXVK_HUD is preconfigured to compiler; otherwise it is set to devinfo, memory, drawcalls, fps, version, api, compiler if the Developer and Debug settings are enabled in the bottle. Other variables can be found in the official repository and can be set using the Environment Variables field in the bottle preferences, like this: DXVK_HUD='pipelines,gpuload,memory' DXVK_STATE_CACHE=0
https://docs.usebottles.com/components/dxvk
2021-04-10T14:22:28
CC-MAIN-2021-17
1618038057142.4
[]
docs.usebottles.com
Due to the large number of events that may have transpired, the Event Manager may take a lengthy time to open. To enhance the opening of the Event Manager, you must periodically Archive and Purge the events. It is recommended that you archive all events that are older than 30 days, then purge these events. See the Network Configuration Manager Installation Guides for information on using the Archive and Purge Utility. The Event Manager feature allows you to view and manage activities that have transpired on the network. For example, you can access the log and view the Event, the Owner (or user), the Network that was accessed, the Date/Time the event was logged, and more! Events can be related to Device events, System events, and Security events. This feature is designed to assist you in maintaining security, as well as auditing and following the activities of events and users. Accessing the Event Manager You can access the Event Manager from the following system locations: From the menu bar, select Tools, then Event Manager . From the Devices view , select a device, then select Properties. The General tab contains access to the Event Manager. More about Event Manager, You can complete the following tasks within the Event Manager: Export Filter Refresh Select the columns you want displayed in the Event Manager. See Displaying Columns to review the list of columns available for you to display on each tab. Sort If an object (such as a network or a device) is deleted from the system, the event remains in the log. You can click within the Auto Resize check box to resize the width of the columns . To view more or less Events per page, select a number from the Page Size drop-down arrow. After selecting the pages you want to view, click Refresh to refresh the log screen. This page sizing allows you to maneuver between pages. The events that are logged are grouped into the following categories. You can select any one of these tabs when the Event Manager is displayed. Events differ for each category. All Events System Events Security Events Device Events
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.4/ncm-online-help-1014/GUID-F287973D-6EFA-4C52-B2DB-43E473756CCC.html
2021-04-10T15:18:43
CC-MAIN-2021-17
1618038057142.4
[array(['images/GUID-8494EAF0-A49D-478E-9A27-D38E8C077AD4-low.png', 'newdash'], dtype=object) array(['images/GUID-C8E2F275-E293-46CE-8C7C-8B514208F962-low.png', 'eventmanagernew'], dtype=object) ]
docs.vmware.com
Model Insertion command-line interface (MIC) is an application that guides modelers through the steps required for encapsulating a model component and exposing a set of inputs and parameters of interest so they can be added to a Model Catalog Service. In addition, MIC also allows describing basic model metadata such as model version, model configuration, parameters, inputs, outputs, authors and contributors. MIC has been tested on OSX, Linux and Windows. It is installed through a simple pip command. Info: MIC is an ALPHA version, which we are still testing and developing continuously. If you find an error or experience any issue, please report them here. What is a model component?¶ Encapsulating software into components allows other users to easily access and run software in their own environments. Following well-established component-based software engineering principles, we want to create self-contained software components that only reveal functionality that is of interest to third parties. This is important because models are often implemented in large software packages or libraries that contain many tools and functions to run the model in many different ways, to prepare data, to visualize data, etc. It is hard to navigate all the possible functions, especially for those who are interested in sophisticated functionality that may be hard to achieve. Other models have graphical user interfaces that are convenient to use, but cannot be used for invoking the model from another program. A user interface button to run a model would call a specific function of the model software, and that function (sometimes called a command line invocation, or invocation function) is what we want to capture. That function is known as the component interface, and its inputs can be provided when invoking the component, but all other data or parameters will be pre-set and internal to the component so no one will be able to change them. Finally, for reproducibility reasons, we want to be able to record how a model execution was set up, which means having an explicit description of the specific function call that was used to run the model. These issues are addressed by encapsulating software. A model component corresponds to a single invocation function for model software. From a sophisticated model software package, a model component could be created to include only certain model processes and variables while excluding others. For example, from a hydrology model software package we could create a component for arid zones that includes infiltration processes but not snowmelt processes from the package. The invocation function for that configuration could have as input the recharge rates. How MIC Works¶ MIC guides you through creating a model component and uploading it to the MINT Model Catalog so it is available to others, in 9 simple steps. Below is an overview of the different steps in MIC. Requirements¶ MIC has the following requirements: - Python >= 3.6 - Docker Getting Python 3¶ MIC uses Python. Please follow the steps below to install it: Docker¶ MIC uses Docker to test and run model components. Installation¶ To install MIC, open a terminal and run: $ pip install mic You did it! If you want to verify the installation just type: $ mic version You should see a message similar to: mic v1.0.1 Limitations¶ Note that MIC has been designed to run Unix-based applications. Windows-based applications (e.g., models that execute through an .exe) are not currently supported.
Development version¶ If you want to install the latest development version, open a terminal and type: $ pip install git+ -U Note that the development version may be unstable. Issues, Troubleshooting and Feature Requests¶ Known issues with MIC are listed here. If you experience any issues when using MIC, or if you would like us to support additional exciting features, please open an issue on our GitHub repository. Code Releases and Next Updates¶ The latest release of MIC is available in GitHub. You can check the issues and updates we are working on for the next releases here.
https://mic-cli.readthedocs.io/en/latest/
2021-04-10T14:18:41
CC-MAIN-2021-17
1618038057142.4
[array(['figures/overview_01.png', 'Diagram'], dtype=object)]
mic-cli.readthedocs.io
Amazon SNS Connector. Prerequisites To be able to use the Amazon SNS Connector, you must have the following: Access to Amazon Web Services - SNS. To access AWS with the connector, you need the credentials in the form of IAM. Anypoint Studio version 7.0 (or higher) or Anypoint Design Center. Connector Global Element To use the Amazon SNS connector in your Mule application, configure a global Amazon SNS element that can be used by all the Amazon SNS connectors in the application. Configuring with Studio Visual Editor Click the Global Elements tab at the base of the canvas. In the Global Configuration Elements screen, click Create. Following window would be displayed. In the Choose Global Type wizard, expand Connector Configuration and select Amazon SNS Configuration and click Ok. Following window would be displayed. In the image above, the placeholder values refer to a configuration file placed in the srcfolder of your project. Configure the parameters according to instructions below. You can either enter your credentials into the global configuration properties, or reference a configuration file that contains these values. For simpler maintenance and better reusability SNS. Click OK to save the global connector configurations. Configuring with XML Editor or Standalone Ensure that you have included the Amazon SNS namespaces in your configuration file. Create a global Amazon SNS configuration outside and above your flows, using the following global configuration code. If you or your IAM users forget or lose the secret access key, you can create a new access key. Using This Connector Amazon SNS connector is an operation-based connector, which means that when you add the connector to your flow, you need to configure a specific operation for the connector to perform. The connector currently supports the following list of operations:, paste the namespace and schema into your Configuration XML. Use Cases and Demos. You can subscribe an Amazon SQS queue to an Amazon SNS topic using the AWS Management Console for Amazon SQS, which simplifies the process. Create a new Mule Project in Anypoint Studio. Add the below properties to mule-artifact.propertiesfile to hold your Amazon SNS and SQS credentials and place it in the project’s src/main/appdirectory. Click on a Mule HTTP Connector and select Listener operation, drag it to the beginning of the flow and configure the following parameters: Click on the Amazon SNS Connector and select the operation "Publish" and drag <sns:basic-connection <-details < > ReceiveMessages in the left side of the new flow and configure it according to the steps below: Click the plus sign next to the Connector Configuration field to add a new Amazon SQS Global Element. Configure the global element according to the table below: Your configuration should look like this (Queue URL can be skipped if Queue Name is specified): The corresponding XML configuration should be as follows: <sqs:config Make sure SQS-Queue that you mentioned in configuration should be subscribed to SNS-Topic. like: <sqs:receivemessages Add a Logger scope after the Amazon SQS connector to print the data that is being passed by the Receive operation in the Mule Console. Configure the Logger according to the table below. Save and run the project as a Mule Application. Right-click the project in Package Explorer. Run As > Mule Application. Open a web browser and check the response after entering the URL. The logger displays the published message ID on the browser and the received message on the mule console. 
See Also: If you or your IAM users forget or lose the secret access key, you can create a new access key. More information about the keys is available in the AWS documentation. Subscribe Queue to Amazon SNS Topic.
https://docs.mulesoft.com/connectors/amazon-sns-connector
2018-07-16T00:52:55
CC-MAIN-2018-30
1531676589029.26
[array(['./_images/amazon-sns-use-case-flow.png', 'Sending messages to SQS Queue'], dtype=object)]
docs.mulesoft.com
Crate cdrs. cdrs is a native Cassandra DB client written in Rust. CDRS supports traffic decompression as described in the Apache Cassandra protocol. One module contains the Rust representation of Cassandra consistency levels. The frame module contains general Frame functionality. Another module contains the declaration of the CDRSTransport trait, which should be implemented for a particular transport in order to be able to use it as a transport of the CDRS client.
https://docs.rs/cdrs/2.0.0-beta.1/cdrs/
2018-07-16T00:53:22
CC-MAIN-2018-30
1531676589029.26
[]
docs.rs
Users and groups¶ TYPO3 CMS features an access control system based on users and groups. Users¶ Each user of the backend must be represented with a single record in the table "be_users". This record contains the username and password, other meta data and some permissions settings. The above screenshot shows a part of the editing form for the backend user "simple_editor" from the Introduction Package. If you have an Introduction Package available, you can check further properties of that user. It is part of the "Simple editors" group, has a name, an email address and its default language for the backend is English. It is possible to assign rights directly to a user, but it is much better done using groups. Furthermore groups offer far more options. Groups¶ Each user can also be a member of one or more groups (from the "be_groups" table) and each group can include sub-groups. Groups contain the main permission settings you can set for a user. Many users can be a member of the same group and thus share permissions. When a user is a member of many groups (including sub-groups) then the permission settings are added together so that the more groups a user is a member of, the more access is granted to him. This screenshot shows just an extract of the group editing form. It contains many more fields! The "admin" user¶ There is a special kind of backend users called "Admin". When creating a backend user, just check the "Admin!" box in the "General" tab and that user will become an administrator. There's no need to set further access options for such a user: an admin user can access every single feature of the TYPO3 CMS backend, like the "root" user on a UNIX system. All systems must have at least one "admin" user and most systems should have only "admin" users for the developers - not for any editor. Make sure to not share TYPO3 accounts with multiple users but create dedicated accounts for everyone. Not even "super users" should be allowed "admin" access since that will most likely grant them access to more than they need. Admin users are differentiated with an orange icon. Note There's no other level between admin and ordinary users. This seems to be a strong limitation, especially when you consider that ordinary users may not access TypoScript templates. However, there is a security reason for this. From a TypoScript template, you can call a PHP script. So - in effect - a user with access to TypoScript can run arbitrary PHP code on the server, for example in order to create an admin account for himself. This type of escalation cannot be allowed. Location of users and groups¶ Since both backend users and backend groups are represented by records in the database, they are edited just as any other record in the system. However backend users and groups are configured to exist only in the root of the page tree where only admin users have access: Records located in the page tree root are identified by having their "pid" fields set to zero. The "pid" field normally contains the relation to the page where a record belongs. Since no pages can have the id of zero, this is the id of the root. Notice that only "admin" users can edit records in the page root! If you need non-admin users to create new backend users, have a look at the TYPO3 system extension sys_action for a possible solution.
https://docs.typo3.org/typo3cms/CoreApiReference/ApiOverview/AccessControl/UsersAndGroups/Index.html
2018-07-16T00:56:47
CC-MAIN-2018-30
1531676589029.26
[array(['../../../_images/AccessBackendUser.png', 'Part of the editing form for user "simple\\_editor" of the Introduction Package'], dtype=object) array(['../../../_images/AccessBackendGroup.png', 'Part of the editing form for group "Simple editors" of the Introduction Package'], dtype=object) array(['../../../_images/AccessBackendUserAdmin.png', 'In Web > List view, the different icon for admin users'], dtype=object) array(['../../../_images/AccessBackendUserList.png', 'Users and groups reside on the root page'], dtype=object)]
docs.typo3.org
Apache CloudStack uses Jira to track its issues and Github for pull requests. All new features and bugs for 4.9.0 have been merged through Github pull requests. A subset of these changes are tracked in Jira, which have a standard naming convention of “CLOUDSTACK-NNNN” where “NNNN” is the issue number.
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.9.0/fixed_issues.html
2018-07-16T00:57:45
CC-MAIN-2018-30
1531676589029.26
[]
docs.cloudstack.apache.org
Code author: Sang Han <[email protected]>
translate.translator(source, target, phrase, version='0.0 test', charset='utf-8')[source]¶ Returns the url encoded string that will be pushed to the translation server for parsing. The list of acceptable language codes for source and target languages can be found as a JSON file in the etc directory. Some source languages are limited in the scope of the possible target languages that are available.
>>> from translate import translator
>>> translator('en', 'zh-TW', 'Hello World!')
'你好世界!'
translate.coroutine(func)[source]¶ Initializes a coroutine, essentially priming it to the yield statement. Used as a decorator over functions that generate coroutines.
# Basic coroutine producer/consumer pattern
from translate import coroutine

@coroutine
def coroutine_foo(bar):
    try:
        while True:
            baz = (yield)
            bar.send(baz)
    except GeneratorExit:
        bar.close()
translate.push_url(interface)[source]¶ Decorates a function returning the url of the translation API. Creates and maintains HTTP connection state. Returns a dict response object from the server containing the translated text and metadata of the request body.
translate.source(target, inputstream=<_io.TextIOWrapper>)[source]¶ Coroutine starting point. Produces a text stream and forwards it to consumers.
translate.spool(iterable, maxlen=1250)[source]¶ Consumes text streams and spools them together for more IO-efficient processes.
translate.set_task(translator, translit=False)[source]¶ Task Setter Coroutine. End point destination coroutine of a purely consumer type. Delegates text IO to the write_stream function.
translate.write_stream(script, output='trans')[source]¶
translate.accumulator(init, update)[source]¶ Generic accumulator function.
# Simplest Form
>>> c = functools.reduce(accumulator, a, b)
>>> c
'this that'
# The type of the initial value determines output type.
>>> a = 5
>>> b = 5
>>> c = functools.reduce(accumulator, a, b)
>>> c
10
translate.translation_table(language, filepath='supported_translations.json')[source]¶ Opens up the file located under the etc directory containing language codes and prints them out.
translate.print_table(language)[source]¶ Generates a formatted table of language codes
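To make the coroutine decorator above more concrete, here is a tiny, self-contained sketch of a primed consumer. The printer coroutine is hypothetical and not part of py-translate; it only demonstrates that a decorated coroutine can receive send() calls immediately, without an explicit next() to prime it.

```python
from translate import coroutine

@coroutine
def printer():
    """Hypothetical consumer: upper-cases whatever it is sent."""
    try:
        while True:
            line = (yield)      # value arrives from a producer's send()
            print(line.upper())
    except GeneratorExit:
        pass                    # the producer closed the pipeline

sink = printer()                # already primed by the decorator
sink.send('hello world')        # prints: HELLO WORLD
sink.close()
```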
http://py-translate.readthedocs.io/en/master/devs/api.html
2018-07-16T01:08:13
CC-MAIN-2018-30
1531676589029.26
[]
py-translate.readthedocs.io
AttachVolume attaches an EBS volume to a running or stopped instance and exposes it to the instance with the specified device name. For more information about EBS volumes, see Attaching Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.
Request Parameters: The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters.
- Device: The device name (for example, /dev/sdh or xvdh). Type: String. Required: Yes
- DryRun: Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation. Type: Boolean. Required: No
- InstanceId: The ID of the instance. Type: String. Required: Yes
- VolumeId: The ID of the EBS volume. The volume and instance must be within the same Availability Zone. Type: String. Required: Yes
Example 1: This example request attaches the volume with the ID vol-1234567890abcdef0 to the instance with the ID i-1234567890abcdef0 and exposes it as /dev/sdh.
Sample Request
&VolumeId=vol-1234567890abcdef0
&InstanceId=i-1234567890abcdef0
&Device=/dev/sdh
&AUTHPARAMS
Sample Response (excerpt)
<AttachVolumeResponse>
   <status>attaching</status>
   <attachTime>YYYY-MM-DDTHH:MM:SS.000Z</attachTime>
</AttachVolumeResponse>
See Also: For more information about using this API in one of the language-specific AWS SDKs, see the following:
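For a quick way to exercise the same operation from a terminal, the AWS CLI exposes an equivalent command. The sketch below reuses the placeholder IDs from the sample request; substitute your own volume, instance, and device name.

```bash
aws ec2 attach-volume \
    --volume-id vol-1234567890abcdef0 \
    --instance-id i-1234567890abcdef0 \
    --device /dev/sdh
```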
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AttachVolume.html
2018-07-16T01:18:02
CC-MAIN-2018-30
1531676589029.26
[]
docs.aws.amazon.com
Break on Exception If a fatal error occurs while your program is being debugged, PHP Tools breaks on it by default. A fatal error is an error which prevents the continuation of the script execution, e.g. a parse error or an unhandled exception. In this case the debugger can be used to inspect the current program state. If you continue or step, the exception will continue to be thrown until it is either handled or you exit the program. Some fatal errors (e.g. user-unhandled exception) are raised outside of the running context after the script has finished. The inspection of the program state, in this case, does not work because the script is not running anymore. You can choose to break on any exception immediately when thrown. These settings can be modified in the Exceptions dialog. On the Debug menu, click Exceptions, and expand the PHP Exceptions entry. Here you can see all the exceptions that are already known and can be configured. To configure an exception that does not appear in this list, click the Add button to add it. The name must match the full name of the exception precisely. The left-hand checkbox ("Thrown") for each exception controls whether the debugger always breaks when it is raised. You should check this box when you want to break more often for a particular exception. Common issues Stepping through code works, but PHP exceptions are not thrown in Visual Studio Check your php.ini for xdebug.default_enable directive and make sure it is set to 1 (this is a default value).
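For reference, the relevant php.ini line looks like the snippet below; the location of php.ini depends on your PHP installation, and since 1 is the default value you normally only need this if the directive was changed.

```ini
; Let Xdebug report exceptions to the attached debugger (default value).
xdebug.default_enable = 1
```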
https://docs.devsense.com/en/debugging/exceptions
2018-07-16T00:41:43
CC-MAIN-2018-30
1531676589029.26
[array(['https://docs.devsense.com/content_docs/debugging/imgs/parse-error.png', 'Parse error'], dtype=object) array(['https://docs.devsense.com/content_docs/debugging/imgs/exceptions-configuration.png', 'Exceptions configuration dialog'], dtype=object) ]
docs.devsense.com
Content with label eap6 in JBoss AS 7.1 (See content from all spaces) Related Labels: high, wildfly, cluster, jboss, tutorial, mod_jk, domain, httpd, eap, ha, load, modcluster, mod_cluster, getting_started, balancing, as7, availability
https://docs.jboss.org/author/label/AS71/eap6
2018-07-16T00:48:25
CC-MAIN-2018-30
1531676589029.26
[]
docs.jboss.org
Backend User Object¶ One other global variable is of interest - $FILEMOUNTS, holding an array with the file mounts of the $BE_USER. Checking user access¶ The $BE_USER object is mostly used to check user access rights, but it contains other helpful information as well. This is presented here by way of a few examples: Checking access to current backend module¶ If you know the module key you can check if the module is included in the access list by this function call: $BE_USER->check('modules', 'web_list'); Here access to the module "Web > List" is checked. Access to tables and fields?¶ Is "admin"?¶ If you want to know if a user is an "admin" user (has complete access), just call this method: $BE_USER->isAdmin(); Read access to a page?¶ Saving module data¶ This stores the input variable $compareFlags (an array!) with the key "tools_beuser/index.php/compare" $compareFlags = \TYPO3\CMS\Core\Utility\GeneralUtility::_GP('compareFlags'); $BE_USER->pushModuleData('tools_beuser/index.php/compare', $compareFlags); Getting module data¶ This gets the module data with the key "tools_beuser/index.php/compare" (lasting only for the session) $compareFlags = $BE_USER->getModuleData('tools_beuser/index.php/compare', 'ses'); Getting TSconfig¶ This function can return a value from the "User TSconfig" structure of the user. In this case the value for "options.clipboardNumberPads": $tsconfig = $BE_USER->getTSConfig(); $clipboardNumberPads = $tsconfig['options.clipboardNumberPads'] ?? ''; Getting the username¶ The full "be_users" record of an authenticated user is available in $BE_USER->user as an array. This will return the "username": $BE_USER->user['username'] Get User Configuration value¶ The internal ->uc array contains options which are managed by the User Tools > User Settings module (extension "setup"). These values are accessible in the $BE_USER->uc array. This will return the current state of "Notify me by email, when somebody logs in from my account" for the user: $BE_USER->uc['emailMeAtLogin']
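As a small composite example of the checks above, one might gate a feature on either admin status or explicit module access. This is only an illustrative sketch combining the calls documented in this section:

```php
<?php
// Illustrative: allow the action if the user is an admin
// or has the "Web > List" module in their access list.
if ($BE_USER->isAdmin() || $BE_USER->check('modules', 'web_list')) {
    // render the link to the Web > List module here
}
```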
https://docs.typo3.org/typo3cms/CoreApiReference/ApiOverview/BackendUserObject/Index.html
2018-07-16T00:42:02
CC-MAIN-2018-30
1531676589029.26
[]
docs.typo3.org
Hosts and virtual machines in a View deployment must meet specific hardware and operating system requirements. View Composer RequirementsWith View Composer, you can deploy multiple linked-clone desktops from a single centralized base image. View Composer has specific installation and storage requirements. View Connection Server RequirementsView Connection Server acts as a broker for client connections by authenticating and then directing incoming user requests to the appropriate remote desktops and applications. View Connection Server has specific hardware, operating system, installation, and supporting software requirements. View Administrator RequirementsAdministrators use View Administrator to configure View Connection Server, deploy and manage remote desktops and applications, control user authentication, initiate and examine system events, and carry out analytical activities. Client systems that run View Administrator must meet certain requirements. Horizon Client RequirementsHorizon Client runs on many types of devices: Windows, Mac, and Linux desktops and laptops; Linux thin and zero clients; tablets; and phones. All of these devices have specific requirements. Supported Operating Systems for View AgentThe View Agent component assists with session management, single sign-on, device redirection, and other features. You must install View Agent on all virtual machines, physical systems, and RDS hosts.
https://docs.vmware.com/en/VMware-Horizon-6/6.1/com.vmware.horizon-view.upgrade.doc/GUID-0CCC3C1B-44E4-410E-966F-EAC4AB42D10B.html
2018-07-16T01:23:43
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
Putnam: (860) 928-7330 Charlton: (508) 980-7074 Our physician is board certified by THE AMERICAN BOARD OF OTOLARYNGOLOGY. Dr. Charon earned his undergraduate degree from the University of Notre Dame and attended the Georgetown University School of Medicine on a Navy Health Professions Scholarship. He completed his surgical internship at the Naval Medical Center San Diego and then served as a General Medical Officer providing medical support for the Marines at Camp Pendleton, California for two years. Doctor Charon took his specialty training in Otolaryngology at the Naval Medical Center, San Diego. He began practice in 2001 at the U.S. Naval Hospital Okinawa in Japan where he served as Department Head of Otolaryngology. He left the Navy as a Lieutenant Commander after ten years of service to join our practice in 2003.
http://www.ent-docs.com/ent-physicians/
2018-07-16T00:34:33
CC-MAIN-2018-30
1531676589029.26
[]
www.ent-docs.com
Updated on 30-06-2018. A Consumer is a SONM user who buys and uses the computing resources of other users (Suppliers) through the system. The purpose of the Consumer is to perform a specific task in an optimal way (most quickly, most cheaply, by the price/speed criterion, most reliably, or others). Every Consumer has their own Ethereum account, which is their unique identifier for: Consumers should have SNM tokens on the SONM blockchain to rent computational resources. > Important! If you have participated in the SONM Testnet, you should uninstall the previous version of the SONM software. If you are already running SONM in Livenet and just want to update your software, please DO NOT run the uninstall script. You will lose your keystore vault if you run this script. Please UNINSTALL SONM with the script: curl -s | sudo bash Install SONM Components. We recommend using the auto-installation script: sudo bash -c "$(curl -s)" You can inspect the generated CLI configuration with cat ~/.sonm/cli.yaml. To rent hardware in SONM, you should: Order and deal prices on the SONM marketplace are in USD. Payments are executed in SNM tokens at the actual SNM/USD exchange price. When you open a deal, SNM tokens from your address are transferred to the marketplace smart contract. You may see your deals on the Market/Deals page. You may see deal details by clicking on the deal. To run the task, you should use the SONM CLI. sonmcli deal list. sonmcli task start <deal_ID> <task.yaml> You may see task specification examples on our GitHub. You may run your custom task only if you: - are renting your own hardware (the Buyer Ethereum address equals the Supplier Ethereum address within a deal). - OR the whitelist check is disabled in the Supplier's worker configuration. - OR you have passed KYC certification and have the 'Identified' identity level in your profile. See the SONM CLI Guide for additional information about task management. You may close the deal at any time with sonmcli deal close <deal_ID>, or using the SONM GUI. The Supplier will receive payment for the elapsed deal duration. The rest of the SNM tokens frozen on the deal will be returned to your address. You may see deal details with sonmcli deal status <deal_ID>.
https://docs.sonm.io/getting-started/as-a-consumer
2018-07-16T00:36:36
CC-MAIN-2018-30
1531676589029.26
[]
docs.sonm.io
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
<User> element - For encoding, use the <User> element to specify the variable containing the username. Username and password values are concatenated with a colon prior to Base64 encoding. - For decoding, specify the variable where the decoded username is written.
<User ref="request.queryparam.username" />
Attributes
<Password> element - For encoding, use the <Password> element to specify the variable containing the password. - For decoding, specify the variable where the decoded password is written.
<Password ref="request.queryparam.password" />
Attributes
<AssignTo> element - For encoding, this is the variable where the encoded value, in the form Basic Base64EncodedString, is written. For example, request.header.Authorization, corresponding to the Authorization header.
<AssignTo createNew="false">request.header.Authorization</AssignTo>
Attributes
<Source> element - For decoding, the variable containing the Base64 encoded string, in the form Basic Base64EncodedString. For example, specify request.header.Authorization, corresponding to the Authorization header. (A consolidated policy example is sketched at the end of this section.)
Error reference: This section describes the error messages and flow variables that are set when this policy triggers an error. This information is important to know if you are developing fault rules to handle errors. To learn more, see What you need to know about policy errors and Handling faults. Error code prefix: steps.basicauthentication (What's this?). Example error response:
{
  "fault": {
    "detail": { "errorcode": "steps.basicauthentication.UnresolvedVariable" },
    "faultstring": "Unresolved variable : request.queryparam.password"
  }
}
Example fault rule
<FaultRule name="Basic Authentication Faults">
  <Step>
    <Name>AM-UnresolvedVariable</Name>
    <Condition>(fault.name Matches "UnresolvedVariable")</Condition>
  </Step>
  <Step>
    <Name>AM-AuthFailedResponse</Name>
    <Condition>(fault.name = "InvalidBasicAuthenticationSource")</Condition>
  </Step>
  <Condition>(BasicAuthentication.BA-Authentication.failed = true)</Condition>
</FaultRule>
Schemas: See our GitHub repository samples for the most recent schemas.
Related topics: Key Value Map Operations policy
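Pulling the element fragments above together, an Encode-mode policy might look roughly like the sketch below. Treat it as illustrative only: the policy name is made up, and the <Operation> and <IgnoreUnresolvedVariables> elements are assumptions that do not appear in the fragments above, so verify them against the current Apigee reference.

```xml
<BasicAuthentication name="BA-Encode-Credentials">
  <!-- Assumed elements; not shown in the fragments above -->
  <Operation>Encode</Operation>
  <IgnoreUnresolvedVariables>false</IgnoreUnresolvedVariables>
  <!-- Elements documented above -->
  <User ref="request.queryparam.username" />
  <Password ref="request.queryparam.password" />
  <AssignTo createNew="false">request.header.Authorization</AssignTo>
</BasicAuthentication>
```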
http://ja.docs.apigee.com/api-services/reference/basic-authentication-policy
2017-09-19T20:52:38
CC-MAIN-2017-39
1505818686034.31
[array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/icon_policy_threat-protection.jpg', None], dtype=object) ]
ja.docs.apigee.com
#include <URL.h>
class URL {
  URL(std::string uri);
  URL(const URL& rhs);
  URL& operator=(const URL& url);
  const bool operator==(const URL& rhs);
  const bool operator!=(const URL& rhs);
  const std::string getProto();
  const std::string getHostName();
  int getPort();
  const std::string getPath();
  const std::string operator std::string();
};
Objects of this class parse Uniform Resource Identifiers (URIs). Methods of this class return the components of a URI.
Constructors.
URL(std::string uri); Constructs a URL object by parsing uri. The constructor may throw a CURIFormatException exception. See Exceptions below for more information.
URL(const URL& rhs); Constructs a URL object that is an exact duplicate of the rhs object.
Other canonical functions.
The URL class implements assignment, and comparison for equality and inequality. Equality holds if all the components of the parsed URI are identical. Inequality holds if equality does not hold.
Other methods.
const std::string getProto(); Returns the protocol component of the URI. The protocol component describes the mechanism used to access the resource.
const std::string getHostName(); Returns the hostname component of the URI. The hostname describes where in the network the resource described by the URI is located.
int getPort(); Returns the port number part of the URI. While port numbers are optional on real URIs, they are not optional for NSCL URIs. The port determines where the server for the resource is listening for connections.
const std::string getPath(); Returns the path component of the URI. The path component tells the client and server where within the namespaces for the protocol the component is located. The path component is optional. If not provided, it defaults to /.
const std::string operator std::string(); Re-constructs and returns the stringified URL. This should be very close to the string that was used to construct this object, or the object from which the object was copied.
Not all strings are valid URIs. If a URL object is constructed with a string that is not a valid URI, the constructor will throw a CURIFormatException. CURIFormatException is derived from the CException common exception base class. The NSCL Exception class library chapter describes the exception class hierarchy, how to use it and its common set of interfaces. The CException reference page describes the CException class. The CURIFormatException reference page describes the CURIFormatException class.
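A brief usage sketch based on the interface above. The URI string, the header locations, and the error handling details are assumptions for illustration; consult the exception library reference for the real reporting methods.

```cpp
#include <URL.h>
#include <iostream>

int main() {
    try {
        URL url("tcp://spdaq22.nscl.msu.edu:3000/mydata");   // example URI only
        std::cout << "proto: " << url.getProto()    << "\n"
                  << "host : " << url.getHostName() << "\n"
                  << "port : " << url.getPort()     << "\n"
                  << "path : " << url.getPath()     << "\n";
    } catch (CURIFormatException& e) {
        // Thrown by the constructor when the string is not a valid URI.
        std::cerr << "The supplied string was not a valid URI\n";
        return 1;
    }
    return 0;
}
```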
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/r39161.html
2017-09-19T20:28:51
CC-MAIN-2017-39
1505818686034.31
[]
docs.nscl.msu.edu
Flex ads. Last updated on August 23. To apply flexible ad sizes programmatically, such as flexible sizes only for certain refreshes, use the setFlexAdSize parameter:
/* set predefined flex adsize here. Please refer to 'AdConfiguration.OXMAdSize' for more OpenX predefined flex ad sizes */
adView.setFlexAdSize(AdConfiguration.OXMAdSize.BANNER_320x50);
//or set your custom flex adsize as a string
adView.setFlexAdSize("320x50,400x350");
If you do not set these values programmatically, then the values set in the UI will be used for bid requests. The supported ad sizes are as follows:
AdConfiguration.OXMAdSize.BANNER_320x50
AdConfiguration.OXMAdSize.BANNER_300x250
AdConfiguration.OXMAdSize.BANNER_320x50_300x250
AdConfiguration.OXMAdSize.INTERSTITIAL_320x480
AdConfiguration.OXMAdSize.INTERSTITIAL_300x250
AdConfiguration.OXMAdSize.INTERSTITIAL_480x320
AdConfiguration.OXMAdSize.INTERSTITIAL_768x1024
AdConfiguration.OXMAdSize.INTERSTITIAL_1024x768
//Flexible ad sizes for portrait, phone
AdConfiguration.OXMAdSize.INTERSTITIAL_320x480_300x250
//Flexible ad sizes for landscape, phone
AdConfiguration.OXMAdSize.INTERSTITIAL_480x320_300x250
//Flexible ad sizes for portrait, tablet
AdConfiguration.OXMAdSize.INTERSTITIAL_768x1024_320x480_300x250
//Flexible ad sizes for landscape, tablet
AdConfiguration.OXMAdSize.INTERSTITIAL_1024x768_480x320_300x250
https://docs.openx.com/Content/developers/android-sdk/android-sdk-flex-ads.html
2017-09-19T20:44:01
CC-MAIN-2017-39
1505818686034.31
[]
docs.openx.com
How to install R¶ Introduction to R¶ This little booklet has some information on how to use R for time series analysis. R () is a commonly used free Statistics software. R allows you to carry out statistical analyses in an interactive mode, as well as allowing simple programming. Installing R¶ To use R, you first need to install the R program on your computer. How to check if R is installed on a Windows PC¶: - Check if there is an “R” icon on the desktop of the computer that you are using. If so, double-click on the “R” icon to start R. If you cannot find an “R” icon, try step 2 instead. - Click on the “Start” menu at the bottom left of your Windows desktop, and then move your mouse over “All Programs” in the menu that pops up. See if “R” appears in the list of programs that pops up. If it does, it means that R is already installed on your computer, and you can start R by selecting “R” (or R X.X.X, where X.X.X gives the version of R, eg. R 2.10.0) from the list.. Finding out what is the latest version of R¶). Installing R on a Windows PC¶ To install R on your Windows computer, follow these steps: - Go to. - Under “Download and Install R”, click on the “Windows” link. - Under “Subdirectories”, click on the “base” link. - On the next page, you should see a link saying something like “Download R 2.10.1 for Windows” (or R X.X.X, where X.X.X gives the version of R, eg. R 2.11.1). Click on this link. - You may be asked if you want to save or run a file “R-2.10.1-win32.exe”. Choose “Save” and save the file on the Desktop. Then double-click on the icon for the file to run it. - You will be asked what language to install it in - choose English. - The R Setup Wizard will appear in a window. Click “Next” at the bottom of the R Setup wizard window. - The next page says “Information” at the top. Click “Next” again. - The next page says “Information” at the top. Click “Next” again. - The next page says “Select Destination Location” at the top. By default, it will suggest to install R in “C:\Program Files” on your computer. - Click “Next” at the bottom of the R Setup wizard window. - The next page says “Select components” at the top. Click “Next” again. - The next page says “Startup options” at the top. Click “Next” again. - The next page says “Select start menu folder” at the top. Click “Next” again. - The next page says “Select additional tasks” at the top. Click “Next” again. - R should now be installed. This will take about a minute. When R has finished, you will see “Completing the R for Windows Setup Wizard” appear. Click “Finish”. - To start R, you can either follow step 18, or 19: - Check if there is an “R” icon on the desktop of the computer that you are using. If so, double-click on the “R” icon to start R. If you cannot find an “R” icon, try step: How to install R on non-Windows computers (eg. Macintosh or Linux computers)¶). Installing R packages¶. How to install an R package¶ Once you have installed R on a Windows computer (following the steps above), you can install an additional package by following the steps below: -, you can now install an R package (eg. the “rmeta” package) by choosing “Install package(s)” from the “Packages” menu at the top of the R console. This will ask you what website you want to download the package from, you should choose “Ireland” (or another country, if you prefer). It will also bring up a list of available packages that you can install, and you should choose the package that you want to install from that list (eg. “rmeta”). 
- This will install the “rmeta” package. - The “rmeta” package is now installed. Whenever you want to use the “rmeta” package after this, after starting R, you first have to load the package by typing into the R console: >). How to install a Bioconductor R package¶, now type in the R console: > source("") > biocLite() - This will install a core set of Bioconductor”). This takes a few minutes (eg. 10 minutes). - At a later date, you may wish to install some extra Bioconductor packages that do not belong to the core set of Bioconductor packages. For example, to install the Bioconductor package called “yeastExpData”, start R and type in the R console: > source("") > biocLite("yeastExpData") - Whenever you want to use a package after installing it, you need to load it into R by typing: > library("yeastExpData") Running R¶. - Click on the “Start” button at the bottom left of your computer screen, and then choose “All programs”, and start R by selecting “R” (or R X.X.X, where X.X.X gives the version of R, eg. R 2.10.0) from the menu of programs. This should bring up a new window, which is the R console. A brief introduction to R¶() Links and Further Reading¶. Acknowledgements¶ License¶ The content in this book is licensed under a Creative Commons Attribution 3.0 License.
http://a-little-book-of-r-for-bayesian-statistics.readthedocs.io/en/latest/src/installr.html
2017-09-19T20:40:56
CC-MAIN-2017-39
1505818686034.31
[array(['../_images/image3.png', 'image3'], dtype=object)]
a-little-book-of-r-for-bayesian-statistics.readthedocs.io
This function returns a handle to a sound file. Please note that you are responsible for unloading (cleaning up) any audio files you load with this API. Use the audio.dispose() API to clean up audio handles when you are completely done with them and want to unload them from memory to get back more RAM. In many usage cases, you may want to use the audio file for the entire program, in which case you do not need to worry about disposing of the resource.
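A minimal sketch of the load/play/dispose cycle described above. The file name is a placeholder for an audio file bundled with your project:

```lua
-- Load the sound fully into memory (placeholder file name).
local tapSound = audio.loadSound( "tap.wav" )

-- Play it whenever needed during the app's lifetime.
audio.play( tapSound )

-- Once the sound is no longer needed, free the memory and clear the handle.
audio.dispose( tapSound )
tapSound = nil
```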
http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/api/library/audio/loadSound.html
2017-09-19T20:35:07
CC-MAIN-2017-39
1505818686034.31
[]
docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
The ring pipe utilities are intended to provide mechanisms for non ringbuffer aware software to access ring buffers. These utilities are also used by the high level access libraries to provide access to remote ringbuffers via proxy rings. There are two ring pipe utilities: Accepts data from a ring buffer and transmits it to the program's stdout file descriptor. Accepts data from the program's stdin file descriptor and places it in the ring buffer. The ringtostdout command can be used to hoist data out of ring buffers to a command or filter that is not aware of the NSCL Data acquisition system. It is a key element of the pipeline that ships data to SpecTcl, as well as a key component of the networked data distribution system (with stdout redirected to a socket). For detailed usage information see the ringtostdout reference pages. The stdintoring command can be used to allow a non NSCLDAQ aware program or pipeline to be a producer for a ring buffer. This command is also a key component of the networked data distribution system (with stdin redirected to a socket). For detailed information, see the stdintoring reference material.
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/c6914.html
2017-09-19T20:38:44
CC-MAIN-2017-39
1505818686034.31
[]
docs.nscl.msu.edu
Azure Active Directory is a Microsoft Azure service which provides identity and access management. OpsGenie supports single sign on with Azure AD, which means your organization can easily incorporate OpsGenie into your application base in Azure AD and let your users securely access OpsGenie. For general information about OpsGenie's Single Sign-On feature, refer to the Single Sign-On with OpsGenie document. This document describes the specific instructions you can use to integrate Azure Active Directory with OpsGenie SSO. To configure Single Sign-On integration between your Azure Active Directory and OpsGenie accounts, go to OpsGenie SSO page, select "Azure AD" as provider and follow the instructions below: - On another tab or page, open your Azure Portal and navigate to *Active Directory list. - Click the directory in which the OpsGenie application will be added and navigate to the Applications tab in your directory. - Click ADD button that is at the bottom panel. - Select Add an application my organization is developing. - On the next screen, give a name for the application and select WEB APPLICATION AND/OR WEB API as type. - Write to SIGN-ON URL field and to APP ID URI field. Then, click the tick mark at the bottom right corner to save the application. - Navigate to the application you have recently added in the directory. Click VIEW ENDPOINTS button that is at the bottom panel. - On the App Endpoints screen, copy the URL at the FEDERATION METADATA DOCUMENT field - Switch to OpsGenie SSO Settings page that you have opened at the beginning and paste the certificate value into Metadata URL field. - Switch back to Azure AD App Endpoints screen and copy the URL at the SAML-P SIGN-ON ENDPOINT field. Paste this URL into SAML 2.0 Endpoint field at your OpsGenie SSO Settings page. - Click Save Changes on your OpsGenie SSO Settings page. - On OpsGenie SSO Settings page, copy the single sign-on URL that is generated for you. - Switch back to the application that you have added to your Azure Portal. Switch to CONFIGURE tab. - Paste the single sign-on URL that you have recently copied into REPLY URL field under the single sign-on section. Click SAVE that is at the bottom panel and wait until your configuration is saved. - Now users in your active directory can login with OpsGenie via SSO using their directory credentials. ** Make sure that email addresses of users are exactly same on both OpsGenie and your Azure Active Directory. - If you turn on the setting USER ASSIGNMENT REQUIRED TO ACCESS APP for the application you have added to your Azure Active Directory, you explicitly have to provide access for OpsGenie application to users in your directory. To give access to your users for OpsGenie, switch to USERS tab within the application you have created on Azure Active Directory. Select a user you want to give access and click ASSIGN at the bottom panel. Please note: Provisioning is not available for Azure Active Directory.
https://docs.opsgenie.com/docs/azure-active-directory-sso
2017-09-19T20:47:55
CC-MAIN-2017-39
1505818686034.31
[]
docs.opsgenie.com
SQLAlchemy 1.2 Documentation (prerelease) - SQLAlchemy Core - Column Insert/Update Defaults¶ - Scalar Defaults - Python-Executed Functions - SQL Expressions - Server Side Defaults
Server Side Defaults¶ A variant on the SQL expression default is Column.server_default, which gets placed in the CREATE TABLE statement during a Table.create() operation. See also:
- class sqlalchemy.schema.PassiveDefault(*arg, **kw)¶ Bases: sqlalchemy.schema.DefaultClause. A DDL-specified DEFAULT column value. Deprecated since version 0.6: PassiveDefault is deprecated.
- next_value()¶ Return a next_value function element which will render the appropriate increment function for this Sequence within any SQL expression.
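A small illustration of the server-side default described above; the table and column names are made up for the example:

```python
from sqlalchemy import Table, Column, Integer, DateTime, MetaData, func

metadata = MetaData()

# server_default emits a DEFAULT clause in the generated CREATE TABLE,
# so the database itself fills in the column when no value is supplied.
events = Table(
    "events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("created_at", DateTime, server_default=func.now()),
)
```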
http://docs.sqlalchemy.org/en/latest/core/defaults.html
2017-09-19T20:30:03
CC-MAIN-2017-39
1505818686034.31
[]
docs.sqlalchemy.org
Apps using a Service Instance - Monitor Quota Saturation and Service Instance Count - Knowledge Base (Community) - File a Support Ticket Instructions on interacting with the on-demand service broker and on-demand service instance BOSH deployments, and on performing general maintenance and housekeeping tasks Parse a Cloud Foundry (CF) Error Message Failed operations (create, update, bind, unbind, delete) result in an error message. You can retrieve the error message later by running the cf CLI command cf service INSTANCE-NAME. $ cf service myservice Service instance: myservice Service: super-db Bound apps: Tags: Plan: dedicated-vm Description: Dedicated Instance Documentation url: Dashboard: Last Operation Status: create failed Message: Started: 2017-03-13T10:16:55Z Updated: 2017-03-13T10:17:58Z Use the information in the Message field to debug further. Provide this information to Pivotal Support when filing a ticket. The task-id field maps to the BOSH task id. For further information on a failed BOSH task, use the bosh task TASK-ID command in the BOSH CLI. The broker-request-guid maps to the portion of the On-Demand Broker log containing the failed step. Access the broker log through your syslog aggregator, or access BOSH logs for the broker by typing bosh logs broker 0. If you have more than one broker instance, repeat this process for each instance. Access Broker and Instance Logs and VMs Before following the procedures below, log into the cf CLI and the BOSH CLI. Access Broker Logs and VM(s) - GUID of your service instance with the cf CLI command cf service MY-SERVICE --guid. To download your BOSH manifest for the service, run bosh download manifest service-instance_SERVICE-INSTANCE-GUID MY-SERVICE.ymlusing the GUID you just obtained and a filename you want to save the manifest as. Run bosh deployment MY-SERVICE.yml to select the deployment. Run bosh instancesto view VMs in the deployment. Run bosh ssh INSTANCE-IDto SSH onto the VM. Run bosh logs INSTANCE-IDto download instance logs. Run Service Broker Errands to Manage Brokers and Instances From the BOSH CLI, you can run service broker errands that manage the service brokers and perform mass operations on the service instances that the brokers created. These service broker errands include: register-brokerregisters a broker with the Cloud Controller and lists it in the Marketplace deregister-brokerderegisters a broker with the Cloud Controller and removes it from the Marketplace upgrade-all-service-instancesupgrades existing instances of a service to its latest installed version delete-all-service-instancesdeletes all instances of service orphan-deploymentsdetects “orphan” instances that are running on BOSH but not registered with the Cloud Controller whenever the broker is re-deployed with new catalog metadata to update the Cloud Foundry catalog. Plans with disabled service access are not visible to non-admin Cloud Foundry users (including Org Managers and Space Managers). Admin Cloud Foundry users can see all plans including those with disabled service access. The errand does the following: - Registers the service broker with Cloud Controller - Enables service access for any plans that have the radio button set to enabledin the tile plan page. - Disables service access for any plans that have the radio button set to disabledin the tile plan page. - Does nothing for any for any plans that have the radio button set to manual may errand does the following: - Collects all of the service instances the on-demand broker has registered. 
- For each instance the errand serially: - Issues an upgrade command to the on-demand broker. - Re-generates the service instance manifest based on its latest configuration from the tile. - Deploys the new manifest for the service instance. - Waits for this operation to complete, then proceeds to the next instance. -. Delete All Service Instances This errand deletes all service instances of your broker’s service offering in every org and space of Cloud Foundry. It uses the Cloud Controller API to do this, and therefore only deletes instances the Cloud Controller knows about. It will newly-created instances are detected, the errand fails. WARNING:. Detect Orphaned Instances Service Instances A service instance is defined as ‘orphaned’ when the BOSH deployment for the instance is still running, but the service is no longer registered in Cloud Foundry. The orphan-deployments errand collates a list of service deployments that have no matching service instances in Cloud Foundry and return the list to the operator. It is then up to the operator to remove the orphaned bosh deployments._name":"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}] [stderr] Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete. Errand 'orphan-deployments' completed with error (exit code 10) These details will also be available through the BOSH /tasks/ API endpoint for use in scripting: $ curl '' | jq . { "exit_code": 10, "stdout": "[{"deployment_name":"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}]\n", "stderr": "Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.\n", "logs": { "blobstore_id": "d830c4bf-8086-4bc2-8c1d-54d3a3c6d88d" } } If no orphan deployments exist, the errand script will: - Exit with exit code 0 - Stdout will be an empty list of deployments - Stderr will be None [stdout] [] [stderr] None Errand 'orphan-deployments' completed successfully (exit code 0) If the errand encounters an error during running it will: - Exit with exit 1 - Stdout will be empty - Any error messages will be under stderr To clean up orphaned instances, perform the following action on each: $ bosh delete deployment service-instance_SERVICE-INSTANCE-GUID WARNING: This may leave IaaS resources in an unusable state. Select the BOSH Deployment for a Service Instance to Apps using a Service Instance If you want to identify which apps are using a specific service instance from the BOSH deployments name, you can run the following steps: - Take the deployment name and strip the service-instance_leaving you with the GUID. - Login to CF as an admin. - Obtain a list of all service bindings by running the following: cf curl /v2/service_instances/<GUID>/service_bindings - The output from the above curl will give you a list of resources, with each item referencing a service binding, which contains the app_url. To find the name, org and space for the app, run the following: cf curl <app_url>and note the app name under entity.name cf curl <space_url>to obtain the space, using the entity.space_urlfrom the above curl. Note the space name under entity.name cf curl <organization_url>to obtain the org, using the entity.organization_urlfrom the above curl. 
Note the organization name under entity.name Note: When running cf curl ensure that you query all pages, as the responses are limited to a certain number of bindings per page (default is 50). To find the next page simply curl the value under next_url Monitor Quota Saturation and Service Instance Count Quota saturation and total number of service instances are available through ODB metrics emitted to Loggregator. The metric names are shown below: Note: Quota metrics are not emitted if no quota has been set..
https://docs.pivotal.io/svc-sdk/odb/0-17/tshooting-techniques.html
2017-09-19T20:36:55
CC-MAIN-2017-39
1505818686034.31
[]
docs.pivotal.io
Sample Null Transform Plugin¶ This section provides a step-by-step description of what the null transform plugin does, along with sections of code that apply. For context, you can find each code snippet in the complete source code. Some of the error checking details are left out - to give the description a step-by-step flow, only the highlights of the transform are included. Below is an overview of the null transform plugin: Gets a handle to HTTP transactions. void TSPluginInit (int argc, const char *argv[]) { TSHttpHookAdd (TS_HTTP_READ_RESPONSE_HDR_HOOK, TSContCreate (transform_plugin, NULL)); With this TSPluginInitroutine, the plugin is called back every time Traffic Server reads a response header. Checks to see if the transaction response is transformable. static int transform_plugin (TSCont contp, TSEvent event, void *edata) { TSHttpTxn txnp = (TSHttpTxn) edata; switch (event) { case TS_EVENT_HTTP_READ_RESPONSE_HDR: if (transformable (txnp)) { transform_add (txnp); } The default behavior for transformations is to cache the transformed content (you can also tell Traffic Server to cache untransformed content, if you want). Therefore, only responses received directly from an origin server need to be transformed. Objects served from cache are already transformed. To determine whether the response is from the origin server, the routine transformablechecks the response header for the “200 OK” server response. static int transformable (TSHttpTxn txnp) { TSMBuffer bufp; TSMLoc hdr_loc; TSHttpStatus resp_status; TSHttpTxnServerRespGet (txnp, &bufp, &hdr_loc); if (TS_HTTP_STATUS_OK == (resp_status = TSHttpHdrStatusGet (bufp, hdr_loc)) ) { return 1; } else { return 0; } } If the response is transformable, then the plugin creates a transformation vconnection that gets called back when the response data is ready to be transformed (as it is streaming from the origin server). static void transform_add (TSHttpTxn txnp) { TSVConn connp; connp = TSTransformCreate (null_transform, txnp); TSHttpTxnHookAdd (txnp, TS_HTTP_RESPONSE_TRANSFORM_HOOK, connp); } The previous code fragment shows that the handler function for the transformation vconnection is null_transform. Get a handle to the output vconnection (that receives data from the tranformation). output_conn = TSTransformOutputVConnGet (contp); Get a handle to the input VIO. (See the handle_transformfunction.) input_vio = TSVConnWriteVIOGet (contp); This is so that the transformation can get information about the upstream vconnection’s write operation to the input buffer. Initiate a write to the output vconnection of the specified number of bytes. When the write is initiated, the transformation expects to receive WRITE_READY, WRITE_COMPLETE, or ERRORevents from the output vconnection. See the handle_transformfunction for the following code fragment: data->output_vio = TSVConnWrite (output_conn, contp, data->output_reader, TSVIONBytesGet (input_vio)); Copy data from the input buffer to the output buffer. See the handle_transformfunction for the following code fragment: TSIOBufferCopy (TSVIOBufferGet (data->output_vio), TSVIOReaderGet (input_vio), towrite, 0); Tell the input buffer that the transformation has read the data. See the handle_transformfunction for the following code fragment: TSIOBufferReaderConsume (TSVIOReaderGet (input_vio), towrite); Modify the input VIO to tell it how much data has been read (increase the value of ndone). 
See the handle_transform function for the following code fragment: TSVIONDoneSet (input_vio, TSVIONDoneGet (input_vio) + towrite); If there is more data left to read (if ndone < nbytes), then the handle_transform function wakes up the downstream vconnection with a reenable and wakes up the upstream vconnection by sending it WRITE_READY: if (TSVIONTodoGet (input_vio) > 0) { if (towrite > 0) { TSVIOReenable (data->output_vio); TSContCall (TSVIOContGet (input_vio), TS_EVENT_VCONN_WRITE_READY, input_vio); } } else { The process of passing data through the transformation is illustrated in the following diagram. The downstream vconnections send WRITE_READY events when they need more data; when data is available, the upstream vconnections reenable the downstream vconnections. In this instance, the TSVIOReenable function sends TS_EVENT_IMMEDIATE. Passing Data Through a Transformation If the handle_transform function finds there is no more data to read, then it sets nbytes to ndone on the output (downstream) VIO and wakes up the output vconnection with a reenable. It then triggers the end of the write operation from the upstream vconnection by sending the upstream vconnection a WRITE_COMPLETE event. TSVIONBytesSet (data->output_vio, TSVIONDoneGet (input_vio)); TSVIOReenable (data->output_vio); TSContCall (TSVIOContGet (input_vio), TS_EVENT_VCONN_WRITE_COMPLETE, input_vio); } When the upstream vconnection receives the WRITE_COMPLETE event, it will probably shut down the write operation. Similarly, when the downstream vconnection has consumed all of the data, it sends the transformation a WRITE_COMPLETE event. The transformation handles this event with a shutdown (the transformation shuts down the write operation to the downstream vconnection). See the null_plugin function for the following code fragment: case TS_EVENT_VCONN_WRITE_COMPLETE: TSVConnShutdown (TSTransformOutputVConnGet (contp), 0, 1); break; The following diagram illustrates the flow of events: Ending the Transformation
https://docs.trafficserver.apache.org/en/latest/developer-guide/plugins/http-transformations/sample-null-transformation-plugin.en.html
Data Structure

A PSAS Message has a header with an ASCII four-character code (e.g., 'ADIS') that identifies it. This is followed by a 6-byte timestamp in nanoseconds; the count is always nanoseconds since the beginning of the program, which is usually the same as boot time for a device. Next come two bytes that give the size of the rest of the message. The remainder of the message is a collection of Data.
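As a rough illustration of that layout, the sketch below packs the 12-byte header in C. The field names and the big-endian byte order are assumptions made for the example, not something taken from the library, so check the serializer's own definitions before relying on them.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative view of the 12-byte PSAS message header described above. */
struct psas_header {
    char    fourcc[4];     /* ASCII four-character code, e.g. "ADIS"   */
    uint8_t timestamp[6];  /* nanoseconds since program start (48-bit) */
    uint8_t length[2];     /* size of the data that follows, in bytes  */
};

/* Pack the header byte by byte so the layout does not depend on struct
 * padding or host byte order (big-endian is assumed here). */
static void
psas_header_pack (uint8_t out[12], const char fourcc[4], uint64_t t_ns, uint16_t len)
{
    memcpy (out, fourcc, 4);
    for (int i = 0; i < 6; i++)
        out[4 + i] = (uint8_t) (t_ns >> (8 * (5 - i)));   /* 48-bit timestamp */
    out[10] = (uint8_t) (len >> 8);                       /* 16-bit length    */
    out[11] = (uint8_t) (len & 0xff);
}
```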
https://psas-packet-serializer.readthedocs.io/en/latest/packet.html
{"_id":"59bc03d41d2d8d001a3445":"57bb7e47afc18c0e00529cf-04-20T23:49:42.659Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"[block:callout]\n{\n \"type\": \"warning\",\n \"title\": \"Using Mac or Linux?\",\n \"body\": \"Find instructions at [Quick Start (Mac/Linux)]()\"\n}\n[/block]\n\n[block:callout]\n{\n \"type\": \"info\",\n \"title\": \"Viro Platform requires a key.\",\n \"body\": \"Make sure to sign up [here]() to get your key emailed to you.\"\n}\n[/block]\n# Install Node and the React Native CLI\nGo to the [React Native Getting Started]() guide and follow the steps in the first two sections under **Installing Dependencies**(**Node** and **The React Native CLI**).\n\nNote: you do **not** need Android Studio/Xcode to use the testbed application.\n\n# Create a new React Native project.\nOpen Powershell and navigate to where you want to create the Viro project and run the command \n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"react-native init ViroSample --version=0.47.2\",\n \"language\": \"shell\"\n }\n ]\n}\n[/block]\nThis will create a React Native project in the ViroSample directory.\n\n# Add a Dependency on React Viro\nRun the following commands in Powershell\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"cd ViroSample\\nnpm install -S -E react-viro\",\n \"language\": \"shell\"\n }\n ]\n}\n[/block]\n# Copy Files from React Viro\nCopy the files from `node_modules\\react-viro\\bin\\files\\javascript\\*` to the root of your directory.\n\nThis should override the `index.android.js` and `index.ios.js` files and add `rn-cli.config.js` and a `js/` directory to your ViroSample project.\n\n# Add Your API Key\nModify the `index.android.js` or `index.ios.js` file and add your API key you got when you signed up (or go ahead and sign up on our [website]( for a key)).\n\n# Download/Update the Viro Media App\nInstall the Viro Media app from the app store on your device. The app is free.\n\n**iOS**\n[Viro Media App]()\n\n**Android**\n[Viro Media App]()\n\n# Start Your Packager Server\nIn Powershell, at the root of your new Viro project, run \"npm start\" which should start the React Native packager server.\n\n**Note: Make sure your computer and phone are on the same network**\n\n# Using the Testbed App\n\n1. Open the Viro Media App on your phone\n2. Pull out the left panel and select \"Enter Testbed\"\n3. Find the local IP address of your computer (one way is to open another Powershell window and run \"ipconfig\" and look for the IPv4 Address).\n4. type in your local IP address and hit \"Go\".\n5. You should now be in a 360 degree photo of a beach with the text \"Hello World!\" in front of you. If not, then try shaking the device until a development menu appears and hit \"Reload\" and double-check that the local IP address entered was correct.\n\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"HelloWorld.png\",\n 2560,\n 1440,\n \"#253c41\"\n ]\n }\n ]\n}\n[/block]\n**Congratulations, you now have Viro set up and running!**\n\n# Next Steps/Other Resources\n1. Want to learn more about the Viro Platform? Check out our [Tutorial](doc:tutorial) where we go through how to modify the Hello World VR Scene.\n2. New to React Native? 
Check out the React Native [Tutorial]() which goes over some basic concepts of React Native which we leverage.","excerpt":"","slug":"quick-start-windows","type":"basic","title":"Quick Start (Windows)"}
http://docs.viromedia.com/docs/quick-start-windows