instruction (string, lengths 27 to 22.5k) | chosen (string, lengths 27 to 28.2k) | rejected (string, lengths 19 to 24.9k) | __index_level_0__ (int64, 0 to 10k) |
---|---|---|---|
<p>I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process? </p>
<p>Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :)</p>
| <p>Yeah, gdb is good for lower level debugging.</p>
<p>You can change threads with the <em>thread</em> command.</p>
<p>e.g.</p>
<pre><code>(gdb) thr 2
[Switching to thread 2 (process 6159 thread 0x3f1b)]
(gdb) backtrace
....
</code></pre>
<p>You could also check out Python-specific debuggers like <a href="http://winpdb.org/about/" rel="noreferrer">Winpdb</a> or <a href="http://bashdb.sourceforge.net/pydb/" rel="noreferrer">pydb</a>. Both are platform independent.</p>
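<p>If you can add a tiny bit of instrumentation to the program before it hangs, Python's standard <code>faulthandler</code> module (Python 3.3+) is a lighter-weight complement to attaching gdb. The following is only an illustrative sketch, not something from the answer above: it registers a signal handler so that sending SIGUSR1 to the stuck process dumps the traceback of every thread without stopping it (Unix only).</p>
<pre><code>import faulthandler
import signal

# Install once at startup. Later, `kill -USR1 <pid>` makes the hung process
# print the current stack of all threads to stderr, which usually shows
# which locks the deadlocked threads are waiting on.
faulthandler.register(signal.SIGUSR1, all_threads=True)
</code></pre>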
| <p>What platform are you attempting this on? Most debuggers allow you to attach to a running process by using the process id.
You can either output the process id via logging or find it using something like Task Manager.
Once that is achieved it will be possible to inspect individual threads and their call stacks.</p>
<p>EDIT: I don't have any experience with GNU Debugger (GDB), which is cross platform, however I found this <a href="http://wiki.python.org/moin/DebuggingWithGdb" rel="nofollow noreferrer">link</a> and it may start you on the right path. It explains how to add debug symbols (handy for reading stack traces) and how to instruct gdb to attach to a running python process.</p>
| 6,996 |
<p>I'd like to use alternatives to System Center Virtual Machine Manager 2008 if possible; in other words, are there any FREE tools?</p>
| <p>Before SCVMM, Microsoft's solution was the <a href="http://technet.microsoft.com/en-us/virtualserver/bb676674.aspx" rel="nofollow noreferrer">Virtual Server Migration Toolkit</a>. This requires Windows Server 2003 Automated Deployment Services, which in turn can only be installed on Windows Server 2003 Enterprise Edition. It's about as far from a free tool as you can get. It only works on SP1, not SP2 (unless ADS has been updated since I last checked), and you have to obtain all the patches you've applied to the physical system.</p>
<p>ADS is limited to four partitions per physical disk, because it can't create extended partitions. If your physical system has more than four partitions you have a problem.</p>
<p>Once you do have it running, though, it does actually work.</p>
<p>Many disk copying tools like Ghost or True Image can now produce .vhd files from a physical system.</p>
| <p>Use VMware. It's not free, but you can get a decent 30-day trial, which should be enough to do your conversions. VMware also has other great advantages if you're willing to pay for the product.</p>
| 9,001 |
<p>I would have thought this would be an easy one to Google, but I've been unsuccessful.</p>
<p>I want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute.</p>
<p>Example:</p>
<pre><code><xsl:variable name="myId" select="@id" />
<xsl value-of select="//Root/Some/Other/Path/Where[@id='{@myId}']/@Name />
</code></pre>
<p>That does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doing it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it.</p>
<p>The context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template.</p>
| <p>Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I <em>thought</em> that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.</p>
<pre><code><xsl:variable name="myId" select="@id" />
<xsl:value-of select="//Root/Some/Other/Path/Where[@id=$myId]/@Name" />
</code></pre>
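<p>If you want to sanity-check the corrected XPath outside the stylesheet, XPath variables can be bound the same way from code. This is only an illustrative sketch using Python's lxml (the file and element names are made up, not from the question):</p>
<pre><code>from lxml import etree

tree = etree.parse("data.xml")                    # hypothetical input document
my_id = tree.xpath("string(//Some/Source/@id)")   # the value you would put in $myId

# The keyword argument becomes the XPath variable $myId, like the XSLT variable.
names = tree.xpath("//Root/Some/Other/Path/Where[@id=$myId]/@Name", myId=my_id)
print(names)
</code></pre>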
| <p>You seem to have got confused with use of a variable (which is just $variable) and Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g. </p>
<pre><code><newElement Id="{@Id}"/>
</code></pre>
<p>They can obviously be combined, so you can include a variable in an Attribute Value Template, such as:</p>
<pre><code><newElement Id="{$myId}"/>
</code></pre>
| 2,730 |
<p>I'm having issues with ripples on the first layer of big flat prints. The initial corner of a big flat print is fine, but then ripples begin to form as shown in the screenshot.</p>
<p>I'm just a newbie, so I was thinking they might have something to do with heat or contraction or something. Normally, I use the default and print with no turbofan on the first layer. When I tried adding fan 20% or 50%, nothing much changed (slight differences in the ripple pattern and area, but that pattern varies anyway).</p>
<p>I also wonder if one strip gets bent, then maybe the rest just follow the bends. As far as I know, my heating plate is working fine, has no serious hot spots, and I'm using a high-quality PLA+ filament. I also tried adjusting the print temperature from 205-220 (the range on the box is 205-230). Nothing seemed to help. I am running a default first layer thickness of 0.3 mm because that is supposed to help adhesion (and adhesion is fine).</p>
<p>The ripples look worse than they feel. They feel fairly flat, only slightly rippled, even though they look terrible! (And I don't know what that weird row with blobs is in the top left of the picture. That only happened once; almost like junk was in the nozzle or the feed gears slipped or something).</p>
<p>I'm running a Qidi Xpro machine, Sunlu PLA+ (wonderful) filaments, bed 50 C, print temp 205-215, print speed 30-40 mm/s on the first layer, and first layer thickness 0.3 mm (normal layer thickness is 0.2 mm). This machine has a direct drive with gears immediately above the nozzle.</p>
<p>Does anyone know why this rippling effect occurs, and what I might do to correct it? Thanks</p>
<p>UPDATE: I'm adding this info here to respond to several comments concerning bed leveling, etc. (Thank you to those who made comments!) </p>
<p>1) I'm sure that the bed is as level as I can make it (because I always go through the cycle twice).</p>
<p>2) Regarding clearance, if anything I worry that my clearance is too small since there is a fair amount of drag on my leveling card under the nozzle. So, there is definitely drag on all three level points, about midrange between the lightest drag and the heaviest drag that makes me think I'm filing off part of the nozzle. </p>
<p>3) I do have two nozzles, so I suppose the problem could show up on one but not the other if the nozzles were screwed into the block to give different heights. But the ripple shows up on both nozzles, always in the middle of the build plate, always in the middle of a big flat print. Corners don't usually show ripple effects. I don't want to believe that my build plate dips in the middle on my new machine, either ... :-) Adhesion is fine on small prints in the middle of the plate.</p>
<p>Here is a picture of the bottom of the piece. A careful examination shows an oscillation in the squished filament segments on a filament thread. Almost like the extruder was oscillating vertically in the z-axis at that frequency, or perhaps the filament squishiness was oscillating at that frequency. Looks almost like a weave pattern, since the squished parts alternate position on alternating lines.</p>
<p>It's worth saying again that the piece feels pretty smooth on both the top and bottom sides, even though it looks awful. I don't know what to make of that.</p>
<p><a href="https://i.stack.imgur.com/n1zZA.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/n1zZA.jpg" alt="Ripples on the top side towards the middle of the plate"></a></p>
| <p>The main problem is solved (first layer thickness vs leveled nozzle height).</p>
<p>The following image shows the problem. I was running with a default 0.3 mm first layer (the tooltip setting says a slightly thicker layer helps with adhesion). The build plate was correctly leveled with "midrange" friction on the leveling card at the leveling points.</p>
<p>Problem cause: The midrange leveling height put the nozzle too close to the plate and caused rippling. The first layer thickness was set to 0.3 mm and the thickness of the leveling card was 0.25 mm.</p>
<p>The following image illustrates the problem (and one of the solutions). The bottom right of the image shows rippling. Not knowing what else to do with the "too close" or "too far" or "unlevel" tips in the comments, I just manually lowered the build plate knobs 1/4 turn while the print was in progress. The print began in the lower right. You can see the smooth area where I manually lowered the build plate. Then, to be sure, I raised the plate by restoring the 1/4 turn on the knobs. The rippling returned.</p>
<p><a href="https://i.stack.imgur.com/S4Gdr.jpg" rel="noreferrer" title="Ripples disappear when build plate is lowered 1/4 turn on the knobs"><img src="https://i.stack.imgur.com/S4Gdr.jpg" alt="Ripples disappear when build plate is lowered 1/4 turn on the knobs" title="Ripples disappear when build plate is lowered 1/4 turn on the knobs" /></a></p>
<p>To further explore the 0-90 degree suggestion provided by profesor79, I changed the slicer degree settings to 0-90 degrees and set the first layer thickness to 0.2 mm, which was equivalent to lowering the build plate knobs by 1/4 turn. I kept the same "midrange" friction settings when leveling. The result was a first-layer print with no rippling.</p>
<p><a href="https://i.stack.imgur.com/Eixiu.jpg" rel="noreferrer" title="No ripples with 0-90 degree settings and new build plate distance"><img src="https://i.stack.imgur.com/Eixiu.jpg" alt="No ripples with 0-90 degree settings and new build plate distance" title="No ripples with 0-90 degree settings and new build plate distance" /></a></p>
<h3>Closing Thoughts</h3>
<p>From this experience, I think:</p>
<ol>
<li><p>0.05 mm difference between a thickness of 0.3 mm on the first layer and a leveling-card nozzle height of 0.25 mm makes a rippling difference.</p>
</li>
<li><p>Using mid-range friction vs light friction on the leveling card also makes a difference. You don't need much of a height difference to reach 0.05 mm. Maybe even less is required to cause a ripple.</p>
</li>
<li><p>When printing with a first layer thickness of 0.2 mm, tolerances were tight and I discovered a spot on my build plate that had no adhesion because of a buildup of old adhesive. It left a 1/2-inch hole in the 0.2 mm-thick first layer. I also noticed just a hint of ripple in another place on the build plate, which (I think) indicates a tiny magnetic build plate thickness or warp issue of some kind. Hardly noticeable.</p>
</li>
<li><p>I think I will go forward with a 0.3 mm layer thickness to "absorb" minor flatness inconsistencies in my plate. (I have a glass plate but I have never used it because the magnetic plate is vastly more convenient.) But, to compensate for rippling effects, I will also use a "very light" friction amount when leveling the plate to ensure that the nozzle doesn't get too close to the plate on the first layer.</p>
</li>
<li><p>I found that manually adjusting the build plate height during a solid first layer print was a wonderful way to detect, see, and explore all the relationships between plate leveling, plate flatness, first layer thickness, and friction adjustments on the nozzle. It's very easy to immediately see, understand, and adjust all the related settings to get the best print possible from the machine.</p>
</li>
</ol>
<hr />
<p>Thank you again to everyone who contributed ideas to understanding the problem. It's hard to pick any particular answer because the solution involved multiple ideas, so I have added my own answer to share.</p>
| <ol>
<li><p>The first thing that comes to mind is acceleration, so you could play with it (set it to half the current value and see the results)</p></li>
<li><p>The other source could be the drive belt slipping a little bit on the motor and idler shafts (visually check for any play on the motor/shaft)</p></li>
<li><p>The next one could be some obstruction in the filament path (as this is direct drive, the Bowden tube could add an extra load if it was bent or the spool is blocked)</p></li>
<li><p>As this is a coreXY-type printer, could you set the infill angles in the slicer to 0deg and 90deg? That will force both motors to run at the same time and eliminate any lack of holding torque on the other motor (or, on the other hand, please check if the other motor gets some play when the head is going diagonally)</p></li>
</ol>
| 1,073 |
<p>I recently got a Creality Ender-3, and tried printing a few things for some tests. I’ve printed a cube and just printed a cylindrical tube today, and I notice each time, it adds this random line on the left and a sort of outline around the actual print. Neither of these were there in my Cura file, but they’re always printed and I’m not sure why?</p>
<p><a href="https://i.stack.imgur.com/SyhGh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SyhGh.jpg" alt="enter image description here"></a></p>
| <p>If the printer is printing, it is instructed to do so by the <a href="https://reprap.org/wiki/G-code" rel="nofollow noreferrer">G-code</a> file unless you are printing through an external software program that has extra G-code to print before your print starts. E.g. in OctoPrint print server it is possible to execute G-code before the print starts.</p>
<h2>Left line = Priming</h2>
<p>The <strong>straight line on the left</strong> is typically <strong>used to prime the printer nozzle to get the filament flow starting</strong>, this is typically seen in <a href="https://github.com/prusa3d/PrusaSlicer" rel="nofollow noreferrer">PrusaSlicer</a> (Prusa's fork of Open Source toolpath generator for 3D printers <a href="https://github.com/slic3r" rel="nofollow noreferrer">Slic3r</a>). <em><strong>This straight line is called priming line, purge line or intro line</strong></em>, and is typically (but not necessarily) printed outside or at the edge of the bed area. Furthermore, a prime line print routine will catch errant nozzle ooze, test extrusion (it is the first indication if the nozzle to bed distance is correct; if not you can abort with minimal material loss) and perform a final wipe action to avoid stringing between the prime line and start of the print. Note that this straight prime line is not a standard option in a <em>custom</em> profile of Cura, so this was part of the Ender-3 Preset you imported or possibly you have copied a starting G-code that includes this prime line.</p>
<p>A typical set of G-code lines to create a prime/purge/intro line is found in your start G-code and could look similar to:</p>
<pre>
G1 Y-3.0 F1000.0 ; go outside print area
G92 E0.0
G1 X60.0 E9.0 F1000.0 ; intro line
G1 X100.0 E12.5 F1000.0 ; intro line
G92 E0.0
</pre>
<p>After slicing your object, you will find such lines in the generated G-code file, but they are not displayed in the preview. Further information can be found in <a href="https://3dprinting.stackexchange.com/questions/6355/writing-g-code-swiping-at-start-of-print">Writing G-code : swiping at start of print</a></p>
<h2>Equidistant line = Skirt</h2>
<p>The <strong>lines at a distance from the print</strong> object <strong>are called the "skirt"</strong>; the skirt is an option found under the "Build Plate Adhesion" options in your slicer. The function of the skirt is similar to that described for the straight prime line, but it has additional effects that can be desirable. It also shows fairly fast if the bed is unleveled as a whole or if the bed is greasy. Please look into: <a href="/q/20">"What are main differences between rafts, skirts and brims?"</a>.</p>
<p>Note that it is usually superfluous to use both the prime/purge/intro line and the skirt; both have a similar function. The benefit of the skirt is that you can configure it within the slicer (e.g. length of the printed skirt, height to use as a shield for draft or ooze, and distance to the product). The downside is that a skirt limits the usable build area by the distance and width of the skirt.</p>
| <p>These are features, not bugs. </p>
<p>The line off to the left is the "priming line"; the printer is extruding a bead of material to ensure that any oozing is cleaned off of the filament tip, and that the filament is properly pressed into the hotend and flowing consistently from the nozzle, before beginning your print. Notice how plastic didn't start extruding on the "backstroke" until the extruder had almost reached the back of the plate? If you didn't have that priming line, that material would have been missing from your print's first layer.</p>
<p>The ring around your print is the "skirt". The skirt also helps to prime the extruder, and allows you a quick look at your first layer printing behavior before the printer begins printing your actual part. Is your build plate level? Is the nozzle clearance correct? Is the filament adhering well to your bed prep? Are your build plate size and offsets set up properly in the slicer (or are you about to try to print off the edge of your plate)? A skirt can help you determine all these things very quickly, like before the printer starts working over the actual print area, giving you a chance to correct them on-the-fly or at least quickly cancel the print, and it uses a minimum of material to do so compared to a more substantial plate adhesion aid like a brim or raft.</p>
<p>You can disable or alter the behavior of both of these in your slicer software; exactly how depends on the slicer software. </p>
<p>In Ultimaker's Cura, which comes fairly highly recommended for use with Creality printers like the Ender 3, the priming line is part of the pre-print configuration script in the printer settings, and you'll need to have a working knowledge of GCode to mess with that. GCode isn't terribly difficult, but it was designed for computers to generate more than for humans to comprehend, and incorrect GCode commands in this section can cause printer firmware errors, failed prints and can even damage your printer.</p>
<p>The skirt settings are part of the slicer settings for your specific print, under the "Platform Adhesion" section. You can typically choose to print a skirt, a brim (similar concept but it extends all the way in to the outline of the part's first layer, and helps hold the part on the plate while it prints), a raft (a more complex adhesion aid where the printer will lay down several layers of material as a foundation for the actual print), or nothing at all. You can also customize the settings of each of these to create a larger or smaller skirt or brim.</p>
<p>Personally, I keep both of these print features enabled for almost all prints on my Ender 3 Pro; the only time I disable the skirt is either when I want a brim (small parts like <a href="http://www.3dbenchy.com/download/" rel="nofollow noreferrer">3DBenchies</a> print more reliably with a brim) or when the skirt just won't fit on the build plate (which usually means the part has a large enough footprint not to go anywhere during the print), and the only time I mess with the priming line is when I need that area of the build plate for the actual print. These features cost you maybe a penny or two in filament and can save you far more than that in frustration and wasted plastic on failed or imperfect prints.</p>
| 1,504 |
<p>I have got a simple contacts database but I'm having problems with users entering in duplicate data. I have implemented a simple data comparison but unfortunately the duplicated data that is being entered is not exactly the same. For example, names are incorrectly spelled or one person will put in 'Bill Smith' and another will put in 'William Smith' for the same person.</p>
<p>So is there some sort of algorithm that can give a percentage for how similar an entry is to another?</p>
| <p>You can compare the names with the <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="noreferrer">Levenshtein distance</a>. If the names are the same, the distance is 0, else it is given by the minimum number of operations needed to transform one string into the other.</p>
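<p>To make the idea concrete, here is a small illustrative sketch (not from the answer above) using Python's standard library, which can already give you a similarity percentage; the names are made up:</p>
<pre><code>from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-100 similarity score between two strings."""
    return 100 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(similarity("Bill Smith", "Bill Smyth"))      # high score: likely the same person
print(similarity("Bill Smith", "William Smith"))   # lower score: nicknames need extra handling
</code></pre>
<p>Edit-distance style measures alone will not catch "Bill" vs. "William"; for that you would typically combine them with a nickname table or phonetic matching such as Soundex.</p>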
| <p>You might also want to look into probabilistic matching. </p>
| 5,143 |
<p>I haven't been to enough of these "live" events to really determine which, if any, are worth the time / money. Which ones do you attend and why?</p>
| <p>For conventions, if you're still in university, and can make it to Montreal, Canada, the <a href="http://www.cusec.net/" rel="nofollow noreferrer">Canadian Undergraduate Software Engineering Conference</a> (CUSEC) has been extremely enjoyable. See the <a href="http://2009.cusec.net/" rel="nofollow noreferrer">2009</a> site for the next event, and for a take on what previous years have been like, take a look at the <a href="http://2008.cusec.net/en/speakers.php" rel="nofollow noreferrer">2008 speakers</a> (note: it included <a href="https://stackoverflow.com/users/1/jeff-atwood">Jeff Atwood</a>).</p>
<p>I attend CUSEC primarily because our software engineering society on campus makes a point of organizing a trip to it, but also because of the speakers that present there, and the career fair.</p>
| <p>I used to belong to my local Linux User Group which I co-founded but I treated it more as a social event than anything else but obviously a social event full of geeks is still a great way to get a great debate going :)</p>
<p>Conventions and the like I've not got much out of other than being pestered by businesses who can offer me nothing that is apart from a bunch of Linux and Hacker ones where I've met loads of people who I consider friends offline, again great for the social aspect but pretty worthless to me in other respects.</p>
<p>That's not to say I never got any business out of attending various events it's just that treating them as social occasions meant any business that did come my way was a bonus so I never left an event feeling like it was a waste of time.</p>
| 4,604 |
<p>Apologies, I'm a EE designer and software guy. We've been CNC'ing prototypes, and my office just bought a very cheap 3D printer.</p>
<p>I'm using Cura as recommended, and wanted to print a piece that has features on both sides.</p>
<p>Here is a screenshot of each side.</p>
<p>So if you laid one side flat, you see how there is a subtractive portion underneath it?</p>
<p>Is there a way to 3D print an object like this, and keep the details on each side?<a href="https://i.stack.imgur.com/G2T54.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G2T54.jpg" alt="enter image description here"></a></p>
<p><strong>UPDATE</strong>
I copied some Cura settings from you guys and basically tipped this thing to 45 degrees. Here are the results. Pretty good! The finish has some zits and pops, but the surface details are quite accurate, enough to fit a PCB board in there with confidence.</p>
<p><a href="https://i.stack.imgur.com/4jAWN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jAWN.jpg" alt="enter image description here"></a></p>
| <p>I haven't tried printing anything like that, but one trick is to print the piece at an angle of 45°, so as to minimise the number of surfaces that are horizontal (or near horizontal.) You will still need some supports, but far less than if you just plonked it flat on the build plate. My only other recommendation would be to use a slicer that allows you to define custom supports, such as Simplify 3D or CraftWare (but still print the piece at an angle).</p>
<p>You might even be able to print it on a raft/brim with no supports. I've seen other 3D'ers pull this off.</p>
| <p>I haven't tried printing anything like that, but one trick is to print the piece at an angle of 45°, so as to minimise the number of surfaces that are horizontal (or near horizontal.) You will still need some supports, but far less than if you just plonked it flat on the build plate. My only other recommendation would be to use a slicer that allows you to define custom supports, such as Simplify 3D or CraftWare (but still print the piece at an angle).</p>
<p>You might even be able to print it on a raft/brim with no supports. I've seen other 3D'ers pull this off.</p>
| 693 |
<p>I have a multi-frame layout. One of the frames contains a form, which I am submitting through XMLHttpRequest. Now when I use document.write() to rewrite the frame with the form, and the new page I am adding contains any JavaScript, then the JavaScript is not executed in IE6?</p>
<p>For example:</p>
<pre><code>document.write("<html><head><script>alert(1);</script></head><body>test</body></html>");
</code></pre>
<p>In the above case the page content is replaced with test but the alert() isn't executed. This works fine in Firefox.</p>
<p>What is a workaround to the above problem? </p>
| <p>The workaround is to programmatically add <code><script></code> blocks to the head DOM element in JavaScript in your callback function, or to call the eval() method. It's the only way you can make this work in IE 6.</p>
| <p>Another possible alternative is to use JSON, dynamically adding script references which will be automatically processed by the browser.</p>
<p>Cheers.</p>
| 9,492 |
<p>When I am trying to print something on my home-made 3D printer, I can print the first and second layer, then the Bowden extruder's gear starts slipping; it cannot push the filament anymore, the gear is turning but slipping; the filament cannot go forward.</p>
<p>I tried all configurations of the tightening screw of the spring, corrected PID settings, cleaned the nozzle, tried with 200 °C up to 215 °C (I am using PLA) but no result.</p>
<p>I am wondering whether this is related to speed, feedrate and acceleration settings. Slic3r automatically puts in a feedrate of F1800; is this too high? Do I have to change it every time I slice something? I could proceed with a trial-and-error method, but I need a more rational method.</p>
<p>Any suggestions?</p>
<hr>
<p>The slicer I use (Slic3r) puts F1800 as the speed. Is this too high? Could this be a reason for the filament to slip?</p>
<p>My filament's diameter is 1.75 mm. In the G-code file created by my slicer (Slic3r), the flows are shown as follows:</p>
<pre><code>; external perimeters extrusion width = 0.44mm (4.25mm^3/s)
; perimeters extrusion width = 0.42mm (8.02mm^3/s)
; infill extrusion width = 0.42mm (10.69mm^3/s)
; solid infill extrusion width = 0.42mm (2.67mm^3/s)
; top infill extrusion width = 0.42mm (2.00mm^3/s)
; support material extrusion width = 0.44mm (8.50mm^3/s)
</code></pre>
| <p>The PLA isn't advancing as fast as the gcode requires. Since you've already tried higher temperatures, try printing at half the speed (F value). If that works, try 3/4 of the original F value, etc., until you find the best feed rate at this temperature for your material, printer, and model.</p>
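<p>To see what the <code>F</code> value means for the hotend, it helps to convert the print speed to a volumetric flow (speed × width × layer height). The following is only a back-of-the-envelope sketch; the 0.2 mm layer height is an assumption, so plug in your own values:</p>
<pre><code># Rough volumetric flow estimate for a single extrusion move
feedrate_mm_per_min = 1800   # the F1800 the slicer writes
width_mm = 0.42              # extrusion width from the G-code comments above
layer_height_mm = 0.2        # assumption - use your real layer height

speed_mm_per_s = feedrate_mm_per_min / 60
flow_mm3_per_s = speed_mm_per_s * width_mm * layer_height_mm
print(f"{speed_mm_per_s:.0f} mm/s -> {flow_mm3_per_s:.2f} mm^3/s")
# 30 mm/s -> 2.52 mm^3/s for perimeters; the 8-10 mm^3/s infill and support
# figures listed above are far more demanding and more likely to cause slipping.
</code></pre>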
| <p>I had such effects and fixed it by reducing the flow. It might be that your filament is thicker than it should be. Therefore too much filament ends up in the nozzle. Once the molten filament accumulates enough to rise to the cold end of the hot end it solidifies and nothing moves anymore -> clicking.</p>
<p>So either try to go with the Flow from 100% down to 96% or change the Filament width setting of the slicer. Both will have the same result of g-Codes that push less plastic. If you see under extrusion then you overdid it.</p>
| 1,390 |
<p>I am printing a small cylinder, but when the object is finished, it's smaller than the measurements I used when creating the model.</p>
<p>I used Tinkercad to make a simple model; the dimensions are:</p>
<ul>
<li>width: 90 mm</li>
<li>height: 2 mm</li>
</ul>
<p>After the print was done, the actual dimensions were:</p>
<ul>
<li>width: 70 mm</li>
<li>height: 2 mm</li>
</ul>
<h3>Pictures</h3>
<p>First attempt</p>
<p><a href="https://i.stack.imgur.com/ESch8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ESch8.jpg" alt="one"></a></p>
<p>The smaller object that's in the drawn circle was the first one printed, the dimensions I used were:</p>
<ul>
<li>width: 110 mm</li>
<li>height: 2 mm</li>
</ul>
<p>Then I printed it again, and the result was:</p>
<p><a href="https://i.stack.imgur.com/TAAPI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TAAPI.jpg" alt="Two"></a></p>
| <p>Let's do the math: you printed something of size 9 cm and got a size of 7 cm. This implies that the scale equals <span class="math-container">$\frac{7}{9}=0.778$</span>. In order to print it at the correct size, you should have printed the object at scale <span class="math-container">$\frac{1}{0.778}=1.286$</span>; so 28.6 % bigger, i.e. <span class="math-container">$1.286\times9=11.6$</span> cm. You printed at 11 cm, so the print should become smaller than the pencil-drawn circle on the paper. This is actually what you see in the image you supplied.</p>
<p>This can imply two things: either you scale the prints incorrectly when exporting to STL (but that is unlikely because the Z height is correct), or the steps per mm are incorrectly set in the firmware of your printer. The rotation of the steppers (usually 200 steps) needs to be translated into linear movement; this depends on the pulleys mounted on the steppers (typically used pulleys are: 16 or 20 teeth for belt-driven X and Y axes).</p>
<p>Calibrating the steps per mm of the extruder is answered in <a href="/a/6484/">this answer</a>. For the X and Y axis this works the same. If you have a Marlin based printer firmware, send G-code <a href="https://reprap.org/wiki/G-code#M503:_Print_settings" rel="nofollow noreferrer"><code>M503</code></a> to the printer over a terminal interface as e.g. OctoPrint, Pronterface (as part from Printrun: 3D printing host suite), Repetier-Host have, you can obtain the current values from the reply; these are listed under M92.</p>
<p>That value for X and Y needs to be multiplied by 1.286 (as an example) to get the correct dimensions. You do this by sending G-code <a href="https://reprap.org/wiki/G-code#M92:_Set_axis_steps_per_unit" rel="nofollow noreferrer"><code>M92</code></a> like <code>M92 X100.00 Y100.00</code> (see <a href="/a/10318">this answer</a> that explains which values you should use based on pulleys you use, either 80 or 100) to the printer, to keep these values they need to be stored in memory using G-code <a href="https://reprap.org/wiki/G-code#M500:_Store_parameters_in_non-volatile_storage" rel="nofollow noreferrer"><code>M500</code></a> (note that the values 100.00 should be replaced by the values you get by multiplying the return values for X and Y from <code>M503</code> by the 1.286 multiplication factor, only if the error is systematically increasing with print dimensions, otherwise stick to the calculated values from e.g. the Prusa belt calculator). </p>
<p>Without the proper steps per mm, you will not be able to use the full potential of the bed. An alternative as scaling your prints by the appropriate scaling factor will only help if your scaled print is smaller than the bed size divided by that scaling factor, so no use of the full bed. Rather fix the firmware to fit the actual mechanical layout.</p>
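<p>As a small worked example of the correction described above (using the numbers from this question; your <code>M503</code> values will differ), the arithmetic looks like this:</p>
<pre><code>expected_mm = 90.0            # modelled diameter
measured_mm = 70.0            # printed diameter
current_steps_per_mm = 80.0   # example value reported under M92 by M503

factor = expected_mm / measured_mm                 # about 1.286
new_steps_per_mm = current_steps_per_mm * factor   # about 102.9
print(f"M92 X{new_steps_per_mm:.2f} Y{new_steps_per_mm:.2f}")  # then M500 to save
</code></pre>
<p>As noted above, if the real cause is a wrong pulley assumption (80 vs. 100 steps/mm), prefer the exact calculated value (100) over the measured correction factor.</p>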
| <p>Are you using the stock firmware of your printer? It sounds to me like you have 16-tooth pulleys and your firmware is set for 20-tooth, i.e. 80 steps per mm.</p>
<p>The calculation behind the steps per mm is <span class="math-container">$\frac{\text{Steps per Revolution} \times Microsteps}{Teeth \times Pitch}$</span>. The reason for this is that one revolution of the pulley will move the belt the number of teeth times the pitch of the belt. Now take the total number of steps, Steps per Revolution times microsteps, and divide by the distance moved giving the steps per mm.</p>
<p>In <span class="math-container">$\underline{most}$</span> hobby 3D printers you have:</p>
<ul>
<li>1.8 degrees steppers which equals <span class="math-container">$\frac{360}{1.8}=200$</span> steps per revolution , Less common is 0.9 degrees steppers <span class="math-container">$\frac{360}{0.9}=400$</span></li>
<li>GT2 is the most common belts now which have a pitch of 2mm</li>
<li>The two most common pulleys are 16 tooth and 20 tooth, </li>
<li>Depending on what stepper drivers and or configuration you have
<ul>
<li>A4988 <span class="math-container">$\to$</span> 16 microsteps</li>
<li>DRV8825 <span class="math-container">$\to$</span> 32 microsteps</li>
<li>Trinamic <span class="math-container">$\to$</span> 16-256 mircosteps</li>
</ul></li>
</ul>
<p>In your situation I believe you have a 1.8 degree stepper with 16 microsteps, a gt2 belt, and a 16 tooth pulley. Which means your XY steps per mm should be <span class="math-container">$\frac{200 \times 16}{16 \times 2} = 100$</span>. While your firmware is expecting 20 tooth pulleys, yielding <span class="math-container">$\frac{200 \times 16}{20 \times 2} = 80$</span>. This would result in your prints being <span class="math-container">$\frac{100-80}{100} = 20\%$</span> smaller, which explains your results with the circles.</p>
<p>Generalizing, the steppers, microsteps, and pitch don't matter. To go between 16 tooth pulleys to 20 tooth, multiply by <span class="math-container">$0.8=\frac{16}{20}$</span>. From 20 tooth to 16 tooth, multiply by <span class="math-container">$1.25=\frac{20}{16}$</span>.</p>
| 1,364 |
<p>How can I get line numbers to <code>print</code> in <strong>Visual Studio 2005</strong> when printing <code>code</code> listings?</p>
| <p>There is an option in the Print Dialog to do the same (in VS 2005 and 2008 at least)!</p>
| <p>Isn't there an option in the Print Dialog?</p>
<p>Edit: There is. Go to File => Print, and then in the bottom left there is "Print what" and then "Include line Numbers"</p>
| 2,586 |
<p>In the E3D Kraken cooler block, there is a big, 10 mm grub screw, along the side of the cooler block.</p>
<p>I watched the entire Kraken assembly video:</p>
<p><div class="youtube-embed"><div>
<iframe width="640px" height="395px" src="https://www.youtube.com/embed/wEw4UDUUbIE?start=0"></iframe>
</div></div></p>
<p>There was no mention of this very thick grub screw. The <a href="https://wiki.e3d-online.com/Kraken_Assembly" rel="nofollow noreferrer">E3D Kraken assembly wiki page</a> may refer to this part as the "stainless plug".</p>
<p>Does the depth of the screw inside the Kraken heatbreak affect the effectiveness of the water cooling?</p>
<p>Why was it included in the design at all?</p>
<p>I'm asking because water frequently leaks out of this pore for me, ever since I had to repair some damaged tubing. Additionally I often have to use an extra fan when printing at high temperatures. I'm wondering, before I epoxy this grub screw into place, whether the amount it is tightened into the Kraken has some advantages or disadvantages.</p>
<p>The video shows the part already assembled on the Kraken. This is what the part looks like - it is much larger than the screws used to secure heat throats. <img src="https://static.e3d-online.com/media/catalog/product/cache/b3b166914d87ce343d4dc5ec5117b502/p/l/plugs_10.jpg" alt="enter image description here"></p>
| <p>My problem was two things: the <strong>heatbreak</strong>, which was switched out for the MK2 version (explanation below), and the <strong>Teflon tube</strong> that runs down the heatsink.</p>
<h2>Heatbreak</h2>
<p>Change the heatbreak to a generic E3D one. You can order the heatbreak for the <strong>MK2</strong> from Prusa, or any generic heatbreak for the E3D hot-end assembly.</p>
<p>On the Prusa i3 MK3(s), this component has been given a 45° taper in the middle, between 2.2 and 2 mm. This is done to ease filament retraction for the MMU, and will be nothing but problematic if you are not using the multi-material upgrade. Especially with higher nozzle pressures (e.g. with lower layer heights), the filament may be squeezed into this taper, clogging the hot-end.</p>
<p>You may not experience full clogs, but partial ones that will show themselves as streaks in certain layers on the print.</p>
<h2>Heatsink Teflon Tube</h2>
<p>There is a Teflon tube that runs down the heatsink. It's crucial that this is mounted correctly, and it is not entirely intuitive how.</p>
<p>First, press the Teflon tube into the heatsink all the way to the bottom. Then, try to pull it out slightly. You'll notice that the small plastic ring at the top of the heatsink will pull out slightly along with it. Now, hold this black plastic part at its current position with your fingernail, and push the Teflon tube in the extra amount. When done, there should be no play in the tube.</p>
| <p>The problem with low layer heights is that the filament flow is low; this means that the filament is at temperature for prolonged times, and filament cooking can cause clogging. Try increasing the layer height to 0.2 mm to observe if the same clogging problems still occur; furthermore, try lowering the extrusion temperature.</p>
<p>Also, heat creep (insufficient cooling of the cold end) is a known cause of clogging. Note that this is unlikely on the Prusa MK3S, but you could check whether the fan rotates freely and that there are no additional obstructions in the flow path.</p>
<p>It could be that a combination of too much hotend temperature and a low filament flow is causing this issue.</p>
| 1,512 |
<p>I am using a Java back end for creating an XML string which is passed to the browser. Currently I am using simple string manipulation to produce this XML. Is it essential that I use some XML library in Java to produce the XML string?
I find the libraries very difficult to use compared to what I need.</p>
| <p>It's not essential, but advisable. However, if string manipulation works for you, then go for it! There are plenty of cases where small or simple XML text can be safely built by hand.</p>
<p>Just be aware that creating XML text is harder than it looks. Here's some criteria I would consider:</p>
<ul>
<li>First: how much <em>control</em> do you have on the information that goes into the xml?</li>
</ul>
<p>The less control you have on the source data, the more likely you will have trouble, and the more advantageous the library becomes. For example: (a) Can you <em>guarantee</em> that the element names will never have a character that is illegal in a name? (b) How about quotes in an attribute's content? Can they happen, and are you handling them? (c) Does the data ever contain anything that might need to be encoded as an <a href="http://en.wikipedia.org/wiki/XML_entity" rel="noreferrer">entity</a> (like the less-than which often needs to be output as <strong>&lt;</strong>); are you doing it correctly?</p>
<ul>
<li>Second, maintainability: is the code that builds the XML easy to understand <em>by someone else</em>?</li>
</ul>
<p>You probably don't want to be stuck with the code for life. I've worked with second-hand C++ code that hand-builds XML and it can be surprisingly obscure. Of course, if this is a personal project of yours, then you don't need to worry about "others": substitute "in a year" for "others" above.</p>
<p>I wouldn't worry about performance. If your XML is simple enough that you can hand-write it, any overhead from the library is probably meaningless. Of course, your case might be different, but you should measure to prove it first.</p>
<p>Finally, Yes; you can hand build XML text by hand if it's simple enough; but not knowing the libraries available is <em>probably</em> not the right reason.</p>
<p>A modern XML library is a quite powerful tool, but it can also be daunting. However, learning the essentials of your XML library is not that hard, and it can be quite handy; among other things, it's almost a requisite in today's job marketplace. Just don't get bogged down by namespaces, schemas and other fancier features until you get the essentials.</p>
<p>Good luck.</p>
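<p>To make the escaping point above concrete, here is an illustrative sketch using Python's standard <code>xml.etree.ElementTree</code> (used only as an illustration, since the question itself is about Java); the element and attribute names are made up:</p>
<pre><code>import xml.etree.ElementTree as ET

contact = ET.Element("contact", attrib={"note": 'He said "hi" & left'})
ET.SubElement(contact, "name").text = "Bill <Smith>"

# The serializer escapes &, <, > and the quotes in the attribute for you,
# which is exactly what hand-built string concatenation tends to get wrong.
print(ET.tostring(contact, encoding="unicode"))
# <contact note="He said &quot;hi&quot; &amp; left"><name>Bill &lt;Smith&gt;</name></contact>
</code></pre>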
| <p>No - especially for generating (parsing I would be less inclined to, as input text can always surprise you). I think it's fine - but be prepared to shift to a library should you find yourself spending more than a few minutes maintaining your own code.</p>
| 4,450 |
<p>How do you write the syntax for a While loop?</p>
<h2>C#</h2>
<pre><code>int i = 0;
while (i != 10)
{
Console.WriteLine(i);
i++;
}
</code></pre>
<h2>VB.Net</h2>
<pre><code>Dim i As Integer = 0
While i <> 10
Console.WriteLine(i)
i += 1
End While
</code></pre>
<h2>PHP</h2>
<pre><code><?php
while(CONDITION)
{
//Do something here.
}
?>
<?php
//MySQL query stuff here
$result = mysql_query($sql, $link) or die("Opps");
while($row = mysql_fetch_assoc($result))
{
$_SESSION['fName'] = $row['fName'];
$_SESSION['lName'] = $row['lName'];
//...
}
?>
</code></pre>
<h2>Python</h2>
<pre><code>i = 0
while i != 10:
print i
i += 1
</code></pre>
| <p>In PHP a while loop will look like this:</p>
<pre><code><?php
while(CONDITION)
{
//Do something here.
}
?>
</code></pre>
<p>A real world example of this might look something like this</p>
<pre><code><?php
//MySQL query stuff here
$result = mysql_query($sql, $link) or die("Opps");
while($row = mysql_fetch_assoc($result))
{
$_SESSION['fName'] = $row['fName'];
$_SESSION['lName'] = $row['lName'];
//...
}
?>
</code></pre>
| <h2>TCL</h2>
<pre><code>set i 0
while {$i != 10} {
puts $i
incr i
}
</code></pre>
<p>C++, C, JavaScript, Java and a myriad of other C-like languages all look exactly the same as C#, except in the way that they write the output to the console, or possibly the way you create the variable <code>i</code>. Answering that would belong in some other question.</p>
| 6,384 |
<p><a href="https://3dprinting.stackexchange.com/questions/413/methods-for-smoothing-3d-objects">Methods for smoothing 3D objects</a></p>
<p>So... maybe one day it becomes a community wiki. I think that we need this type of question, so we can provide more detailed answers about methods and practices.</p>
| <p><em>"is still too broad?"</em> is answered by your own question, <em>"we can provide more detailed answers"</em>. </p>
<p>The problem essentially is that this Q&A format isn't suited to very long treatises on this type of subject. It's far better to ask a specific question such as <em>"How do I smooth this ABS print to eliminate all signs that it was 3D printed?"</em> and receive several good answers, than to have a one-stop-wiki question that attempts (and usually does very poorly) at holding all the answers to all the possible smoothing questions.</p>
<p>So I'd recommend we leave this question closed and let people start more specific questions as they run into actual problems.</p>
| <p><em>"is still too broad?"</em> is answered by your own question, <em>"we can provide more detailed answers"</em>. </p>
<p>The problem essentially is that this Q&A format isn't suited to very long treatises on this type of subject. It's far better to ask a specific question such as <em>"How do I smooth this ABS print to eliminate all signs that it was 3D printed?"</em> and receive several good answers, than to have a one-stop-wiki question that attempts (and usually does very poorly) at holding all the answers to all the possible smoothing questions.</p>
<p>So I'd recommend we leave this question closed and let people start more specific questions as they run into actual problems.</p>
| 20 |
<p>C#: What is a good Regex to parse hyperlinks and their description?</p>
<p>Please consider case insensitivity, white-space and use of single quotes (instead of double quotes) around the HREF tag.</p>
<p>Please also consider obtaining hyperlinks which have other tags within the <code><a></code> tags such as <code><b></code> and <code><i></code>.
</p>
| <p>As long as there are no nested tags (and no line breaks), the following variant works well:</p>
<pre><code><a\s+href=(?:"([^"]+)"|'([^']+)').*?>(.*?)</a>
</code></pre>
<p>As soon as nested tags come into play, regular expressions are unfit for parsing. However, you can still use them by applying more advanced features of modern interpreters (depending on your regex machine). E.g. .NET regular expressions use a stack; I found this:</p>
<pre><code>(?:<a.*?href=[""'](?<url>.*?)[""'].*?>)(?<name>(?><a[^<]*>(?<DEPTH>)|</a>(?<-DEPTH>)|.)+)(?(DEPTH)(?!))(?:</a>)
</code></pre>
<p>Source: <a href="http://weblogs.asp.net/scottcate/archive/2004/12/13/281955.aspx" rel="noreferrer">http://weblogs.asp.net/scottcate/archive/2004/12/13/281955.aspx</a></p>
| <p>Just going to throw this snippet out there now that I have it working... this is a less greedy version of one suggested earlier. The original wouldn't work if the input had multiple hyperlinks. The code below will allow you to loop through all the hyperlinks:</p>
<pre><code>static Regex rHref = new Regex(@"<a.*?href=[""'](?<url>[^""^']+[.]*?)[""'].*?>(?<keywords>[^<]+[.]*?)</a>", RegexOptions.IgnoreCase | RegexOptions.Compiled);
public void ParseHyperlinks(string html)
{
MatchCollection mcHref = rHref.Matches(html);
foreach (Match m in mcHref)
AddKeywordLink(m.Groups["keywords"].Value, m.Groups["url"].Value);
}
</code></pre>
| 4,589 |
<p>Sparked by <a href="https://3dprinting.stackexchange.com/questions/1245/running-12v-on-a-24v-heater-cartridge">this question</a>, I wanted to discuss the most efficient and also the easiest ways of thermally insulating the heat block of the hotend.</p>
<p>I have seen <a href="http://numbersixreprap.blogspot.fr/2013/10/does-insulating-heater-block-make.html" rel="noreferrer">Kapton tape insulation as done here</a>, with a very conclusive summary of its usefulness.</p>
<p>In the links of the named article, <a href="http://bukobot.com/hot-end-thermal-management" rel="noreferrer">a method with insulating material from a heatbed is described</a>, however without giving quantitative results.</p>
<p>Additionally, I know that the guys over in the German RepRap forum produce their own <a href="http://forums.reprap.org/read.php?252,584458" rel="noreferrer">silicone covers for the heater block</a>. As I understand it, there is a large spread between people's reports, from 'almost negligible' as an insulator (but helpful for other things) to very useful. No quantification, though. Also, these seem to come with a certain amount of effort to produce.</p>
<p>Are there additional solutions and/or comparisons between solutions?</p>
| <p>The "quick and dirty" approach is to just slap a bunch of Kapton tape on there. The more the better! (Until you need to dismantle for maintenance, anyway.)</p>
<p>I find pre-cut ceramic tape + kapton tape "blankets" to be easy and effective. E3Dv6 and Replicator 1/2 style hot blocks should be compatible. Or you can cut your own using a sharp hobby knife. </p>
<p><a href="https://i.stack.imgur.com/DFZDq.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/DFZDq.jpg" alt="enter image description here"></a>
<a href="http://www.fargo3dprinting.com/products/makerbot-replicator-2-ceramic-insulation-tape/" rel="noreferrer">http://www.fargo3dprinting.com/products/makerbot-replicator-2-ceramic-insulation-tape/</a></p>
<p>The main downside is that they don't insulate two sides of the hot block. But covering the top and bottom provides much of the practical benefit, and you can always add a few more wraps of Kapton tape to cover up the rest of the surfaces. </p>
<p>Another good option that has recently started to become popular is fiberglass heat shield tape. It has a silicone adhesive, woven fiberglass mat, and shiny aluminum surface. (The reflective surface reduces heat radiation.) It's often used in automotive applications around mufflers and the like. You can cut it up into little rectangles for each side of the hot block, or wrap the block similar to Kapton. </p>
<p><a href="https://i.stack.imgur.com/DEiZE.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/DEiZE.jpg" alt="enter image description here"></a>
<a href="https://shop.raffle.ch/shop/insulation_material/" rel="noreferrer">https://shop.raffle.ch/shop/insulation_material/</a></p>
<p>Main issue is quality -- not all brands have adhesive that will hold up to high temp printing. It may smell when initially "burned in" due to the adhesive cooking a bit. I also find that you need a couple layers to get as much insulation as the ceramic+kapton blanket when there's a lot of airflow around the hot block. </p>
| <p>After having seen <a href="https://3dprinting.stackexchange.com/questions/4026/how-can-i-insulate-my-thermistor/4035#4035">this answer</a> to this question, <a href="https://3dprinting.stackexchange.com/questions/4026/how-can-i-insulate-my-thermistor">How can I insulate my thermistor?</a>, I ordered these, from eBay, <a href="http://www.ebay.co.uk/itm/5PCS-3mm-Thick-3D-Printer-Heating-Block-Cotton-Hotend-Nozzle-Heat-Insulation-EW-/282484985258" rel="nofollow noreferrer">5PCS 3mm Thick 3D Printer Heating Block Cotton Hotend Nozzle Heat Insulation EW</a>, for around £0.40</p>
<p><a href="https://i.stack.imgur.com/6f1tY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6f1tY.png" alt="enter image description here"></a></p>
<p>Blurb from the item's description</p>
<blockquote>
<ul>
<li>Thickness: 3mm </li>
<li><p>Dimension : 75mm*21*3mm +/-0.2mm </p></li>
<li><p>The heat insulation cotton is used for 3D printer nozzle keeping warm;</p></li>
<li>The heating insulation cotton is made from heat-resistant ceramic fiber;</li>
<li>The product sizes can be customized according to customer needs;</li>
<li>The benefit for keeping the key parts of the 3D printer heating aluminum block warm;</li>
<li>Is making the internal temperature flat, saving power,and energy;</li>
<li>This High temperature resistant cotton can work for a long time in high temperature of 900 degree.</li>
</ul>
</blockquote>
<p>Other images:</p>
<p><a href="https://i.stack.imgur.com/8Y9KA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Y9KA.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/VLyKX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLyKX.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/nkRUR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nkRUR.png" alt="enter image description here"></a></p>
<p>Granted, these look suspiciously like the strips in <a href="https://3dprinting.stackexchange.com/questions/1247/efficient-and-easy-way-to-thermally-insulate-the-heat-block-of-the-hotend#answer-1249">Ryan's answer</a>, but use cotton in lieu of ceramic tape.</p>
| 301 |
<p>I am building a Prusa i3 MK2S (<a href="https://toms3d.org/2017/02/23/building-cheapest-possible-prusa-i3-mk2/" rel="nofollow noreferrer">Dolly</a>). I find it very difficult to find M5 rods for the Z axis, but I have an M8 lead screw with an 8 mm lead, so I thought, since M8 is widely available online, I could just use it instead of M5. What do you think about it? What should I keep in mind?</p>
| <p>As long as you match the parts, that is OK.
The M8 rod will give you more stiffness and will be harder to bend.
As a result, you will have to recalibrate the steps/mm settings in the firmware.</p>
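<p>As an illustration of that recalibration (a sketch only: it assumes 1.8° steppers at 16 microsteps, which is common but worth checking against your own electronics):</p>
<pre><code># Z steps/mm = (full steps per revolution * microsteps) / lead of the screw
steps_per_rev = 200          # 1.8 degree stepper (assumption)
microsteps = 16              # assumption - depends on your stepper drivers

m5_rod_lead_mm = 0.8         # standard M5 threaded rod pitch the design calls for
m8_leadscrew_lead_mm = 8.0   # the 8 mm lead screw from the question

print((steps_per_rev * microsteps) / m5_rod_lead_mm)         # 4000 steps/mm
print((steps_per_rev * microsteps) / m8_leadscrew_lead_mm)   # 400 steps/mm
</code></pre>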
| <p>You can use the screws and rods you can find easily; of course some dimensions will change and you will need to adjust parameters.</p>
<p>In my first 3D printer I used a standard screw of 9.5 mm (3/8"); the main problem was the couplings, of which none were available in Mexico at that time.</p>
<p>You can buy the bearings on AliExpress; they can ship from there to Mexico, so I guess they can deliver to your country.</p>
<p><strong>The bearings have these sizes:</strong><br>
listed as part number, inner diameter, outer diameter, width.</p>
<pre><code>608ZZ 8x22x7
607ZZ 7x19x6
6000 10x26x8
</code></pre>
<p>Don't try to use the 6000 :D It is a little complicated to reduce the size for a proper coupling if you don't have a lathe. My third printer will use this kind of threaded rod because it is easy to find with no warpage, and now I will use a lathe to reduce the coupling area to 8 mm or 5 mm since I have the couplers on hand.</p>
| 839 |
<p>I really like <a href="http://www.devexpress.com/" rel="nofollow noreferrer">DevX</a> components, but they are pretty expensive. Does anyone know of free equivalents, or a web site where I can look for some kind of free visual components for .NET?</p>
| <p>Check out free <a href="http://www.componentfactory.com/products.php#toolkit" rel="noreferrer">Krypton Toolkit</a> of Component Factory.</p>
| <p>I second that. Krypton all the way. Some of their controls actually outperform the same Telerik control, too.</p>
| 8,437 |
<p>I'm not a SQL expert, and I'm reminded of the fact every time I need to do something beyond the basics. I have a test database that is not large in size, but the transaction log definitely is. How do I clear out the transaction log?</p>
| <p>Making a log file smaller should really be reserved for scenarios where it encountered unexpected growth which you do not expect to happen again. If the log file will grow to the same size again, not very much is accomplished by shrinking it temporarily. Now, depending on the recovery goals of your database, these are the actions you should take.</p>
<h1>First, take a full backup</h1>
<p>Never make any changes to your database without ensuring you can restore it should something go wrong.</p>
<h1>If you care about point-in-time recovery</h1>
<p>(And by point-in-time recovery, I mean you care about being able to restore to anything other than a full or differential backup.)</p>
<p>Presumably your database is in <code>FULL</code> recovery mode. If not, then make sure it is:</p>
<pre><code>ALTER DATABASE testdb SET RECOVERY FULL;
</code></pre>
<p>Even if you are taking regular full backups, the log file will grow and grow until you perform a <em>log</em> backup - this is for your protection, not to needlessly eat away at your disk space. You should be performing these log backups quite frequently, according to your recovery objectives. For example, if you have a business rule that states you can afford to lose no more than 15 minutes of data in the event of a disaster, you should have a job that backs up the log every 15 minutes. Here is a script that will generate timestamped file names based on the current time (but you can also do this with maintenance plans etc., just don't choose any of the shrink options in maintenance plans, they're awful).</p>
<pre><code>DECLARE @path NVARCHAR(255) = N'\\backup_share\log\testdb_'
+ CONVERT(CHAR(8), GETDATE(), 112) + '_'
+ REPLACE(CONVERT(CHAR(8), GETDATE(), 108),':','')
+ '.trn';
BACKUP LOG testdb TO DISK = @path WITH INIT, COMPRESSION;
</code></pre>
<p>Note that <code>\\backup_share\</code> should be on a different machine that represents a different underlying storage device. Backing these up to the same machine (or to a different machine that uses the same underlying disks, or a different VM that's on the same physical host) does not really help you, since if the machine blows up, you've lost your database <em>and</em> its backups. Depending on your network infrastructure it may make more sense to backup locally and then transfer them to a different location behind the scenes; in either case, you want to get them off the primary database machine as quickly as possible.</p>
<p>Now, once you have regular log backups running, it should be reasonable to shrink the log file to something more reasonable than whatever it's blown up to now. This does <em>not</em> mean running <code>SHRINKFILE</code> over and over again until the log file is 1 MB - even if you are backing up the log frequently, it still needs to accommodate the sum of any concurrent transactions that can occur. Log file autogrow events are expensive, since SQL Server has to zero out the files (unlike data files when instant file initialization is enabled), and user transactions have to wait while this happens. You want to do this grow-shrink-grow-shrink routine as little as possible, and you certainly don't want to make your users pay for it.</p>
<p>Note that you may need to back up the log twice before a shrink is possible (thanks Robert).</p>
<p>So, you need to come up with a practical size for your log file. Nobody here can tell you what that is without knowing a lot more about your system, but if you've been frequently shrinking the log file and it has been growing again, a good watermark is probably 10-50% higher than the largest it's been. Let's say that comes to 200 MB, and you want any subsequent autogrowth events to be 50 MB, then you can adjust the log file size this way:</p>
<pre><code>USE [master];
GO
ALTER DATABASE yourdb
MODIFY FILE
(NAME = yourdb_log, SIZE = 200MB, FILEGROWTH = 50MB);
GO
</code></pre>
<p>Note that if the log file is currently > 200 MB, you may need to run this first:</p>
<pre><code>USE yourdb;
GO
DBCC SHRINKFILE(yourdb_log, 200);
GO
</code></pre>
<h1>If you don't care about point-in-time recovery</h1>
<p>If this is a test database, and you don't care about point-in-time recovery, then you should make sure that your database is in <code>SIMPLE</code> recovery mode.</p>
<pre><code>ALTER DATABASE testdb SET RECOVERY SIMPLE;
</code></pre>
<p>Putting the database in <code>SIMPLE</code> recovery mode will make sure that SQL Server re-uses portions of the log file (essentially phasing out inactive transactions) instead of growing to keep a record of <em>all</em> transactions (like <code>FULL</code> recovery does until you back up the log). <code>CHECKPOINT</code> events will help control the log and make sure that it doesn't need to grow unless you generate a lot of t-log activity between <code>CHECKPOINT</code>s.</p>
<p>Next, you should make absolute sure that this log growth was truly due to an abnormal event (say, an annual spring cleaning or rebuilding your biggest indexes), and not due to normal, everyday usage. If you shrink the log file to a ridiculously small size, and SQL Server just has to grow it again to accommodate your normal activity, what did you gain? Were you able to make use of that disk space you freed up only temporarily? If you need an immediate fix, then you can run the following:</p>
<pre><code>USE yourdb;
GO
CHECKPOINT;
GO
CHECKPOINT; -- run twice to ensure file wrap-around
GO
DBCC SHRINKFILE(yourdb_log, 200); -- unit is set in MBs
GO
</code></pre>
<p>Otherwise, set an appropriate size and growth rate. As per the example in the point-in-time recovery case, you can use the same code and logic to determine what file size is appropriate and set reasonable autogrowth parameters. </p>
<h1>Some things you don't want to do</h1>
<ul>
<li><p><strong>Back up the log with <code>TRUNCATE_ONLY</code> option and then <code>SHRINKFILE</code></strong>. For one, this <code>TRUNCATE_ONLY</code> option has been deprecated and is no longer available in current versions of SQL Server. Second, if you are in <code>FULL</code> recovery model, this will destroy your log chain and require a new, full backup.</p></li>
<li><p><strong>Detach the database, delete the log file, and re-attach</strong>. I can't emphasize how dangerous this can be. Your database may not come back up, it may come up as suspect, you may have to revert to a backup (if you have one), etc. etc.</p></li>
<li><p><strong>Use the "shrink database" option</strong>. <code>DBCC SHRINKDATABASE</code> and the maintenance plan option to do the same are bad ideas, especially if you really only need to resolve a log problem issue. Target the file you want to adjust and adjust it independently, using <code>DBCC SHRINKFILE</code> or <code>ALTER DATABASE ... MODIFY FILE</code> (examples above).</p></li>
<li><p><strong>Shrink the log file to 1 MB</strong>. This looks tempting because, hey, SQL Server will let me do it in certain scenarios, and look at all the space it frees! Unless your database is read only (and if it is, you should mark it as such using <code>ALTER DATABASE</code>), this will absolutely just lead to many unnecessary growth events, as the log has to accommodate current transactions regardless of the recovery model. What is the point of freeing up that space temporarily, just so SQL Server can take it back slowly and painfully?</p></li>
<li><p><strong>Create a second log file</strong>. This will provide temporary relief for the drive that has filled up, but this is like trying to fix a punctured lung with a band-aid. You should deal with the problematic log file directly instead of just adding another potential problem. Other than redirecting some transaction log activity to a different drive, a second log file really does nothing for you (unlike a second data file), since only one of the files can ever be used at a time. <a href="http://www.sqlskills.com/blogs/paul/multiple-log-files-and-why-theyre-bad/" rel="noreferrer">Paul Randal also explains why multiple log files can bite you later</a>.</p></li>
</ul>
<h1>Be proactive</h1>
<p>Instead of shrinking your log file to some small amount and letting it constantly autogrow at a small rate on its own, set it to some reasonably large size (one that will accommodate the sum of your largest set of concurrent transactions) and set a reasonable autogrow setting as a fallback, so that it doesn't have to grow multiple times to satisfy single transactions and so that it will be relatively rare for it to ever have to grow during normal business operations.</p>
<p>The worst possible settings here are 1 MB growth or 10% growth. Funny enough, these are the defaults for SQL Server (which I've complained about and <a href="https://web.archive.org/web/20140108204835/http://connect.microsoft.com:80/SQLServer/feedback/details/415343" rel="noreferrer">asked for changes to no avail</a>) - 1 MB for data files, and 10% for log files. The former is much too small in this day and age, and the latter leads to longer and longer events every time (say, your log file is 500 MB, first growth is 50 MB, next growth is 55 MB, next growth is 60.5 MB, etc. etc. - and on slow I/O, believe me, you will really notice this curve).</p>
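<p>If you are not sure what your current settings look like, a quick query against <code>sys.database_files</code> will tell you whether you are still on those defaults (a small sketch - run it in the database in question):</p>
<pre><code>SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'LOG';
</code></pre>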
<h1>Further reading</h1>
<p>Please don't stop here; while much of the advice you see out there about shrinking log files is inherently bad and even potentially disastrous, there are some people who care more about data integrity than freeing up disk space.</p>
<p><a href="https://sqlblog.org/2009/07/27/oh-the-horror-please-stop-telling-people-they-should-shrink-their-log-files" rel="noreferrer">A blog post I wrote in 2009, when I saw a few "here's how to shrink the log file" posts spring up</a>.</p>
<p><a href="http://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/" rel="noreferrer">A blog post Brent Ozar wrote four years ago, pointing to multiple resources, in response to a SQL Server Magazine article that should <em>not</em> have been published</a>.</p>
<p><a href="http://www.sqlskills.com/blogs/paul/importance-of-proper-transaction-log-size-management/" rel="noreferrer">A blog post by Paul Randal explaining why t-log maintenance is important</a> and <a href="http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/" rel="noreferrer">why you shouldn't shrink your data files, either</a>.</p>
<p><a href="https://dba.stackexchange.com/questions/29829/why-does-the-transaction-log-keep-growing-or-run-out-of-space">Mike Walsh has a great answer covering some of these aspects too, including reasons why you might not be able to shrink your log file immediately</a>.</p>
| <p>DB Transaction Log <strong>Shrink to min size</strong>:</p>
<ol>
<li>Backup: Transaction log</li>
<li>Shrink files: Transaction log</li>
<li>Backup: Transaction log</li>
<li>Shrink files: Transaction log</li>
</ol>
<p>I made tests on several number of DBs: <strong>this sequence works</strong>. </p>
<p>It usually <strong>shrinks to 2MB</strong>.</p>
<p>OR by a script:</p>
<pre><code>DECLARE @DB_Name nvarchar(255);
DECLARE @DB_LogFileName nvarchar(255);
SET @DB_Name = '<Database Name>'; --Input Variable
SET @DB_LogFileName = '<LogFileEntryName>'; --Input Variable
EXEC
(
'USE ['+@DB_Name+']; '+
'BACKUP LOG ['+@DB_Name+'] WITH TRUNCATE_ONLY; ' +
'DBCC SHRINKFILE( '''+@DB_LogFileName+''', 2); ' +
'BACKUP LOG ['+@DB_Name+'] WITH TRUNCATE_ONLY; ' +
'DBCC SHRINKFILE( '''+@DB_LogFileName+''', 2);'
)
GO
</code></pre>
| 8,058 |
<p>When running my application I sometimes get an error about <code>too many files open</code>.</p>
<p>Running <code>ulimit -a</code> reports that the limit is 1024. How do I increase the limit above 1024? </p>
<p><strong>Edit</strong>
<code>ulimit -n 2048</code> results in a permission error.</p>
| <p>You could always try doing a <code>ulimit -n 2048</code>. This will only reset the limit for your current shell and the number you specify must not exceed the hard limit</p>
<p>Each operating system has a different hard limit setup in a configuration file. For instance, the hard open file limit on Solaris can be set on boot from /etc/system.</p>
<pre><code>set rlim_fd_max = 166384
set rlim_fd_cur = 8192
</code></pre>
<p>On OS X, this same data must be set in /etc/sysctl.conf.</p>
<pre><code>kern.maxfiles=166384
kern.maxfilesperproc=8192
</code></pre>
<p>Under Linux, these settings are often in /etc/security/limits.conf.</p>
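<p>For example, raising the open-file limit for all users or for one account could look like the lines below (the numbers and the <code>appuser</code> account are purely illustrative):</p>
<pre><code># domain   type   item     value
*          soft   nofile   4096
*          hard   nofile   10240
appuser    soft   nofile   8192
appuser    hard   nofile   16384
</code></pre>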
<p>There are two kinds of limits:</p>
<ul>
<li><strong>soft</strong> limits are simply the currently enforced limits</li>
<li><strong>hard</strong> limits mark the maximum value which cannot be exceeded by setting a soft limit</li>
</ul>
<p>Soft limits can be set by any user, while hard limits are changeable only by root.
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and per-user limits should be set during user login, for example by using pam_limits.</p>
<p>There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.</p>
| <p>If some of your services are running up against ulimits, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting</p>
<blockquote>
<p>[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread</p>
</blockquote>
<p>Try to put <code>ulimit -s unlimited</code> into <code>/etc/init.d/httpd</code>. This does not require a server reboot.</p>
| 5,490 |
<p>In XSLT how would you find out the length of a node-set?</p>
| <pre><code><xsl:variable name="length" select="count(nodeset)"/>
</code></pre>
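<p>For example, to report how many <code>item</code> children the current node has (the element name is purely illustrative):</p>
<pre><code><xsl:variable name="length" select="count(item)"/>
Number of items: <xsl:value-of select="$length"/>
</code></pre>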
| <p>Generally in XSLT things aren't referred to as <code>Arrays</code>, since there is really no such thing in XSLT. The technical term is either <code>nodesets</code> (made up of zero or more nodes) or in XSLT 2.0 <code>sequences</code>.</p>
| 5,203 |
<p>So while doing some research I stumbled upon a <a href="https://reprap.org/wiki/Glass_Nozzles#Step_1_Assemble_Parts_and_Tools" rel="nofollow noreferrer">wiki page on reprap</a> from a few years back where the user was creating a glass nozzle to replace the brass and PTFE assembly.<a href="https://reprap.org/wiki/Glass_Nozzles#Step_1_Assemble_Parts_and_Tools" rel="nofollow noreferrer">1</a></p>
<p>Does anyone know the theory behind this? Glass is a great insulator so I could see how that would be beneficial for the heat break part but I can't see how it is appropriate for the nozzle as this is normally brass which is a good conductor.</p>
<p>Surely the glass takes much more energy to heat up?</p>
<p>On a side note I've seen similar projects using ceramic instead.</p>
| <p>First off, this is not a glass nozzle, it is a whole hotend design. A super simplistic one.</p>
<p>Glass is, like ceramics, not a good thermal conductor, but it withstands very high temperatures - it only melts at about 1600 °C, which means you will never have to fight melting or warping of the filament path itself - the copper heater wire will melt at about 1084 °C, so way before the glass, and most printable plastics start to decompose below 400 °C.</p>
<p>Construction-wise, this design has some benefits:</p>
<ul>
<li>Due to the design and material properties, this hotend doesn't need cooling fins and a "coldend" is not needed at all.</li>
<li>The whole hotend being one solid piece makes it pretty much a "plug and play" item and prevents leaks.</li>
<li>Glass is extremely abrasive resistant. This means a glass nozzle could be used for stuff like carbon fiber filament very long.</li>
<li>Glass can be molten, repaired and modified with fairly simple equipment, e.g. a burner and some skill.</li>
<li>Glass could be easily cleaned up to medical and food-grade machine ratings. The simplicity of the hotend assembly could make it autoclavable as a whole piece.</li>
</ul>
<p>It has some downsides though:</p>
<ul>
<li>Glass is brittle and does not take lateral forces and sharp impacts kindly. In other words: Handle with extreme care.</li>
<li>Due to the glass being an insulator, the inside of the hotend will have a lower temperature than the outside.
<ul>
<li>A fairly thin-walled meltzone could mitigate this problem to some degree at the downside of making it even more prone to breaking</li>
<li>The insulating behavior means, that the meltzone has to directly feed into the nozzle with as little unheated area as possible to prevent the molten plastic from solidifying inside the nozzle again.</li>
</ul></li>
<li>The skill needed to create a properly sized nozzle from glass is tremendous.</li>
</ul>
| <p>One shortcoming would be that when it comes back to lay down a new line next to an existing line, I would think that it would need to be able to melt the previously printed plastic, especially any bumps and strings.<br>
High thermal conductivity for good heat flow seems important.</p>
| 1,503 |
<p>I have the VMware server with this error; does anyone know how to fix it?<a href="http://soporte.cardinalsystems.com.ar/errorvmwareserver.jpg" rel="nofollow noreferrer">VMware Server Error http://soporte.cardinalsystems.com.ar/errorvmwareserver.jpg</a></p>
| <p>In the Network Connections on the host PC, you might try repairing the connections that are created by VMWare. Something like "VMWare Network Adapter VMnet1"</p>
<p>I'm assuming that the network connections (to a LAN/Internet) are working on the host computer. If not, I'd start by fixing the host first.</p>
| <p>There should be a vmware.log file or something similar in the directory that contains your vm. After you start the vm, are there any new errors in it? </p>
<p>Also, is the network adapter enabled?</p>
| 6,604 |
<p>I'm using the <a href="http://msdn.microsoft.com/en-us/library/ms178329.aspx" rel="nofollow noreferrer">ASP.NET Login Controls</a> and <a href="http://msdn.microsoft.com/en-us/library/aa480476.aspx" rel="nofollow noreferrer">Forms Authentication</a> for membership/credentials for an ASP.NET web application. And I'm using a <a href="http://msdn.microsoft.com/en-us/library/yy2ykkab.aspx" rel="nofollow noreferrer">site map</a> for site navigation.</p>
<p>I have ASP.NET TreeView and Menu navigation controls populated using a SiteMapDataSource. But off-limits administrator-only pages are visible to non-administrator users.</p>
<hr>
<blockquote>
<p><strong><a href="https://stackoverflow.com/users/1574/kevin-pang">Kevin Pang</a></strong> wrote:</p>
<p>I'm not sure how this question is any
different than your <a href="https://stackoverflow.com/questions/33263/how-do-i-best-handle-role-based-permissions-using-forms-authentication-on-my-as">other question</a>…</p>
</blockquote>
<p>The other question deals with assigning and maintaining permissions.</p>
<p>This question just deals with presentation of navigation. Specifically TreeView and Menu controls with sitemap data sources.</p>
<pre><code><asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource1" />
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="False" />
</code></pre>
<hr>
<blockquote>
<p><strong><a href="https://stackoverflow.com/users/2808/nicholas">Nicholas</a></strong> wrote:</p>
<p>add role="SomeRole" in the sitemap</p>
</blockquote>
<p>Does that only handle the display issue? Or are such page permissions enforced?</p>
| <p>You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to be storing changes which are to be <code>.SubmitChanges()</code>'d later, as otherwise you will lose those changes.</p>
<p>If you're just querying stuff then it's fine to create them as needed, but then if later you want to <code>.SubmitChanges()</code> you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the <code>datacontext</code> global throughout your app from the beginning.</p>
<p>Note the data context is <em>disconnected</em>. The connection is only made when the query data is <em>enumerated</em> (not when you first run the query, it's a 'lazy' data type so only provides data when it's needed), and then closed immediately afterwards. On <code>.SubmitChanges()</code> the connection is opened to submit the changes then closed immediately afterwards. So don't think keeping the <code>datacontext</code> around keeps a connection open, it doesn't (you can hook the <code>StateChange</code> event of the connection to confirm this for yourself, that's how I'm sure).</p>
<p>There is a great article over at <a href="http://www.west-wind.com/weblog/posts/246222.aspx" rel="nofollow noreferrer">Rick Strahl's Blog</a> which covers this topic in depth, far more than my answer here provides!!</p>
| <p>I think Jeff Atwood talked about this in the <a href="http://herdingcode.com/?p=36" rel="nofollow noreferrer">Herding Code podcast</a>, when he was questioned about the exact same thing. Listen to it towards the last 15-20 minutes or so.</p>
<p>I think in SO, the datacontext is created in the Controller class. Not sure about a lot of details here. But that's what it looked like.</p>
| 5,343 |
<p>I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account.</p>
<p>I tested it from the command line like:</p>
<pre><code>wget -O - --no-check-certificate \
"https://seconduser:[email protected]/v1/posts/add?url=http://seet.dk&description=test"
</code></pre>
<p>This works great.</p>
<p>I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with:</p>
<pre><code>javascript:void(
open('https://seconduser:[email protected]/v1/posts/add?url='
+encodeURIComponent(location.href)
+'&description='+encodeURIComponent(document.title),
'delicious','toolbar=no,width=500,height=250'
)
);
</code></pre>
<p>But all that happens is that I get this from del.icio.us:</p>
<pre><code><?xml version="1.0" standalone="yes"?>
<result code="access denied" />
<!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 -->
</code></pre>
<p>If I then go to the address bar and press enter, it changes to:</p>
<pre><code><?xml version='1.0' standalone='yes'?>
<result code="done" />
<!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 -->
</code></pre>
<p>Any ideas how to get it to work directly from the bookmarks?</p>
| <p>Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all?</p>
| <p>Does calling the method twice work?</p>
<p>Seems to me that your authentication is being approved after the content arrives, so then a second attempt now works because you have the correct cookies.</p>
| 2,654 |
<p>I understand that this is probably more of an electronics question, but was hoping that someone with experience of using an Anet A6 in the UK (or a country outside of the US/China) may be able to help... or alternatively, someone knowledgeable in electronics!</p>
<p>I recently bought and assembled an Anet A6. I am based in the UK. On the power supply transformer of the Anet A6, there is a switch that allows you to select the input voltage from the mains. There are two options, 100 V or 220 V.</p>
<p>When I turn my Anet A6 printer on, nothing happens... I have triple checked all connections and there doesn't seem to be anything wrongly connected or loose.</p>
<p>I am wondering if the reason it is not working is because in the UK we use a different mains voltage 230 V (I think) and a different frequency 50 Hz (I think) to the US and China (which I assume the printer was built to accommodate)?... I am not 100 % sure on this, just a guess, I am far from an electronics expert.</p>
<p>I don't have a multimeter to test if there is voltage flowing (not that I would even know how to test it lol).
Is it likely that this difference between voltage/freq is the reason that it is not working? If so, is there anyway to fix this? I would prefer to buy something (some sort of converter) than tinker with the electronics, as I have no experience in electronics and live in a rented flat, which I really don't want to burn down (not that I would if i owned it).</p>
<p>Any help is massively appreciated, thanks in advance!</p>
<hr>
<p><strong>Update</strong> </p>
<p>I have done what @Oscar suggested and also bought a multimeter to test the circuitry. I plugged my Anet A6 into the mains power supply and turned it on, but still nothing happens... the LED doesn't light up, nor does the LCD screen turn on. </p>
<p>I tested the voltage of the power supply whilst it was turned on across connections 6 and 8 in the video below (taken from the assembly instructions video, 12 mins 46 seconds):</p>
<p><a href="https://youtu.be/mQzOHL_89nc?t=766" rel="nofollow noreferrer">Assembly Instructions Video, 12:46</a></p>
<p>The 6 and 8 connections correspond to the output from the transformer (ie the connections that would be connected to the mainboard). There was no voltage reading at all when I measured it here with the multimeter. Does this indicate that there is a problem with the transformer/power supply, or is this expected? Or am I testing in the wrong place and there is a better place to test when the printer is on to determine what the problem might be?</p>
| <p>The UK uses 230 V mains voltage. The 220 V designation is from the past, Europe is now using 230 V. You do not have to worry about the frequency.</p>
<p>You should place the switch to 220 V and plug the cord into the socket. The printer should start immediately booting (cycling) the printer firmware, the LCD should light up and the cold end cooling fan will spin (annoyingly).</p>
<p>If nothing happens, you need to check the Power Supply Unit (PSU) and all cables for proper connection (does the fan of the PSU spin if it has one, you should at least see a led light up). A multimeter is not expensive and generally very valuable to test if it outputs 12 V. That way you know the PSU is working or not, if it works the problem is at the main printer board.</p>
<p>As these PSUs are pretty cheap and often of poor quality, you could well have received a broken one. </p>
<hr>
<p><strong><em>How to measure the voltage?</em></strong></p>
<p><em>If you look at the connection terminals you will find labels above them. Measuring position 6 and 8 might be the incorrect ones, this depends on your PSU. If you have exactly the same PSU as from the linked video, measuring between 6 and 8 would be correct:</em></p>
<p><a href="https://i.stack.imgur.com/Z39Wq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z39Wq.png" alt="enter image description here"></a></p>
<p><em>From the image above the connections from left to right (for other PSU units, the order may be different, I have units where the connection to the mains is on the right):</em></p>
<ul>
<li><em><code>L</code>, <code>N</code> and <code>ground</code> are used for connection to the mains,</em></li>
<li><em><code>COM</code> (stands for common or 0 V) or sometimes denoted as <code>-V</code> is the output ground (negative, connection for the black wires) and</em></li>
<li><em><code>+V</code> is positive, connection for the red wires.</em></li>
</ul>
<p><em>You need to measure the voltage difference over <code>COM</code> and <code>+V</code>; this should be the voltage of the power supply. Ideally you measure the voltage when the power supply is delivering a load (e.g. directly connected to a strip of LEDs or directly connected to the heated bed; some faulty PSUs cave in under load, which shows up as a voltage lower than the rated voltage).</em></p>
<p><em>If the PSU is correctly wired, the fuse is not broken, no LED is lit and the measured voltage is zero, the unit is defective.</em></p>
| <p>@Oscar was correct, so long as the switch is set at 220 V, the printer will turn on. I am adding this answer to help anyone else who has a similar problem.</p>
<p>I strongly recommend that you buy a multimeter if you have any power supply issues, as this helped me to figure out what was wrong.</p>
<p>There were three issues that needed to be rectified before my printer would turn on. The first was that I had bought a fairly cheap EU to UK plug converter from my local supermarket. This was mistake number one, as the quality was low and there was no ground pin for the power supply (which is dangerous). I plugged the EU plug into my converter, and then the converter into the UK mains socket, and it would not turn on. By using my multimeter I was able to figure out that the converter was a piece of rubbish. With the plug still plugged into the converter, but the converter removed from the mains, I touched my multimeter cable, whilst in continuity mode, on one of the three pins on the UK side of the converter (the bit that goes into the wall), and the other cable onto one of the terminals that I had connected the power cable to the power supply with. I touched each terminal in sequence to see if it was electrically connected to the pin on the converter. I repeated this in sequence and identified that the live pin on the converter was not connected, and so no current could flow when plugged into mains. I immediately defenestrated the converter. Here is the replacement that I bought:</p>
<p><a href="https://www.amazon.co.uk/gp/product/B00QGYY5DC/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1" rel="nofollow noreferrer">EU to UK converter</a></p>
<p>The next issue was that the power supply cable was wired up incorrectly (or at least unintuitively). In continuity mode again, I touched one multimeter cable to one pin on the EU plug of my power supply, and the other to one of the terminals (which were connected to the wiring of my power cable.) I discovered that the live and neutral wires of the power cable were wired the wrong way round on the plug (in order for it to be used in a UK converter, not sure what the wiring convention is on mainland EU). In the UK, the right hole in the mains socket is live, the left is neutral, and the top one is ground. On the EU plug I had been provided with, the left pin was live and the right pin was neutral. If I were to plug this in to the new converter in the normal orientation (cable coming out the bottom side), the live and neutral would have been connected incorrectly (plug-live -> converter-neutral, plug-neutral -> converter-live). Hence, in order for the pins to be situated correctly in the converter, and subsequently in the mains socket, the EU plug had to be plugged into the converter upside down (plug-live -> converter-live, plug-neutral -> converter-neutral). </p>
<p><strong>Finally, the official instructions for the wiring of the Anet A6 are wrong.</strong> If you follow the youtube video that they provide, you will see that the positive terminal is closest to the bottom of the diagram (on the thermistor/endstop side of the board). <a href="https://youtu.be/mQzOHL_89nc?t=1530" rel="nofollow noreferrer">Mainboard wiring instructions</a>. THIS IS INCORRECT. Check your board, as the polarity will be printed onto the board. If you wire up the mainboard according to the instructions, your V+ wire will go to the negative terminal of the mainboard and, your positive terminal will go to the COM wire of the power supply. This is wrong, as the positive terminal should be connected to the V+ wire, and the negative to COM.</p>
<p>Hopefully this helps save someone some frustration and time!</p>
| 1,551 |
<p>I have a sproc that puts 750K records into a temp table through a query as one of its first actions. If I create indexes on the temp table before filling it, the item takes about twice as long to run compared to when I index after filling the table. (The index is an integer in a single column, the table being indexed is just two columns each a single integer.)</p>
<p>This seems a little off to me, but then I don't have the firmest understanding of what goes on under the hood. Does anyone have an answer for this? </p>
| <p>If you create a clustered index, it affects the way the data is physically ordered on the disk. It's better to add the index after the fact and let the database engine reorder the rows when it knows how the data is distributed.</p>
<p>For example, let's say you needed to build a brick wall with numbered bricks so that those with the highest number are at the bottom of the wall. It would be a difficult task if you were just handed the bricks in random order, one at a time - you wouldn't know which bricks were going to turn out to be the highest numbered, and you'd have to tear the wall down and rebuild it over and over. It would be a lot easier to handle that task if you had all the bricks lined up in front of you, and could organize your work.</p>
<p>That's how it is for the database engine - if you let it know about the whole job, it can be much more efficient than if you just feed it a row at a time.</p>
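<p>In practice, that usually means loading first and indexing afterwards - a minimal sketch of the pattern (the table and column names are illustrative):</p>
<pre><code>CREATE TABLE #work (id INT, val INT);

-- the big 750K-row load goes into an unindexed heap
INSERT INTO #work (id, val)
SELECT id, val FROM dbo.SourceTable;

-- build the index once, over data the engine can sort in a single pass
CREATE CLUSTERED INDEX IX_work_id ON #work (id);
</code></pre>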
| <p>This is because, if the data you insert is not in the order of the index, SQL Server will have to split pages to make room for additional rows in order to keep them together logically.</p>
| 4,851 |
<p>I'm planning to write a simple J2SE application to aggregate information from multiple web sources.</p>
<p>The most difficult part, I think, is extraction of meaningful information from web pages, if it isn't available as RSS or Atom feeds. For example, I might want to extract a list of questions from stackoverflow, but I absolutely don't need that huge tag cloud or navbar.</p>
<p>What technique/library would you advice?</p>
<p><strong>Updates/Remarks</strong></p>
<ul>
<li>Speed doesn't matter — as long as it can parse about 5MB of HTML in less than 10 minutes.</li>
<li>It should be really simple.</li>
</ul>
| <p>You could look at how HttpUnit does it. They use a couple of decent HTML parsers; one is NekoHTML.
As far as getting the data, you can use what's built into the JDK (<code>HttpURLConnection</code>), or use Apache's </p>
<p><a href="http://hc.apache.org/httpclient-3.x/" rel="nofollow noreferrer">http://hc.apache.org/httpclient-3.x/</a></p>
| <p>In short, you may either parse the whole page and pick out the things you need (for speed I recommend looking at SAXParser), or run the HTML through a regexp that strips off all of the HTML... you can also convert it all into a DOM, but that's going to be expensive, especially if you're aiming for decent throughput.</p>
| 9,701 |
<p>What is the main difference between an inner class and a static nested class in Java? Does design / implementation play a role in choosing one of these?</p>
| <p>From the <a href="http://java.sun.com/docs/books/tutorial/java/javaOO/nested.html" rel="noreferrer">Java Tutorial</a>:</p>
<blockquote>
<p>Nested classes are divided into two categories: static and non-static. Nested classes that are declared static are simply called static nested classes. Non-static nested classes are called inner classes. </p>
</blockquote>
<p>Static nested classes are accessed using the enclosing class name:</p>
<pre><code>OuterClass.StaticNestedClass
</code></pre>
<p>For example, to create an object for the static nested class, use this syntax:</p>
<pre><code>OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass();
</code></pre>
<p>Objects that are instances of an inner class exist within an instance of the outer class. Consider the following classes:</p>
<pre><code>class OuterClass {
...
class InnerClass {
...
}
}
</code></pre>
<p>An instance of InnerClass can exist only within an instance of OuterClass and has direct access to the methods and fields of its enclosing instance.</p>
<p>To instantiate an inner class, you must first instantiate the outer class. Then, create the inner object within the outer object with this syntax:</p>
<pre><code>OuterClass outerObject = new OuterClass()
OuterClass.InnerClass innerObject = outerObject.new InnerClass();
</code></pre>
<p>see: <a href="http://download.oracle.com/javase/tutorial/java/javaOO/nested.html" rel="noreferrer">Java Tutorial - Nested Classes</a></p>
<p>For completeness note that there is also such a thing as an <a href="https://stackoverflow.com/questions/20468856/is-it-true-that-every-inner-class-requires-an-enclosing-instance">inner class <em>without</em> an enclosing instance</a>:</p>
<pre><code>class A {
int t() { return 1; }
static A a = new A() { int t() { return 2; } };
}
</code></pre>
<p>Here, <code>new A() { ... }</code> is an <em>inner class defined in a static context</em> and does not have an enclosing instance.</p>
| <p>The difference is that a nested class declaration that is also static can be instantiated outside of the enclosing class.</p>
<p>When you have a nested class declaration that is <em>not</em> static, also known as an <em>inner class</em>, Java won't let you instantiate it except via the enclosing class. The object created out of the inner class is linked to the object created from the outer class, so the inner class can reference the fields of the outer.</p>
<p>But if it's static, then the link does not exist, the outer fields cannot be accessed (except via an ordinary reference like any other object) and you can therefore instantiate the nested class by itself.</p>
| 9,569 |
<p>I have read about partial methods in the latest <a href="http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx" rel="noreferrer">C# language specification</a>, so I understand the principles, but I'm wondering how people are actually using them. Is there a particular design pattern that benefits from partial methods?</p>
| <p>Partial methods have been introduced for similar reasons to why partial classes were in .Net 2.</p>
<p>A partial class is one that can be split across multiple files - the compiler builds them all into one file as it runs.</p>
<p>The advantage for this is that Visual Studio can provide a graphical designer for part of the class while coders work on the other.</p>
<p>The most common example is the Form designer. Developers don't want to be positioning buttons, input boxes, etc by hand most of the time.</p>
<ul>
<li>In .Net 1 it was auto-generated code in a <code>#region</code> block</li>
<li>In .Net 2 these became separate designer classes - the form is still one class, it's just split into one file edited by the developers and one by the form designer</li>
</ul>
<p>This makes maintaining both much easier. Merges are simpler and there's less risk of the VS form designer accidentally undoing coders' manual changes.</p>
<p>In .Net 3.5 Linq has been introduced. Linq has a DBML designer for building your data structures, and that generates auto-code.</p>
<p>The extra bit here is that code needed to provide methods that developers might want to fill in.</p>
<p>As developers will extend these classes (with extra partial files) they couldn't use abstract methods here.</p>
<p>The other issue is that most of the time these methods wont be called, and calling empty methods is a waste of time.</p>
<p>Empty methods <a href="https://stackoverflow.com/questions/11783/in-net-will-empty-method-calls-be-optimized-out">are not optimised out</a>.</p>
<p>So Linq generates empty partial methods. If you don't create your own partial to complete them the C# compiler will just optimise them out.</p>
<p>So that it can do this partial methods always return void.</p>
<p>If you create a new Linq DBML file it will auto-generate a partial class, something like</p>
<pre><code>[System.Data.Linq.Mapping.DatabaseAttribute(Name="MyDB")]
public partial class MyDataContext : System.Data.Linq.DataContext
{
...
partial void OnCreated();
partial void InsertMyTable(MyTable instance);
partial void UpdateMyTable(MyTable instance);
partial void DeleteMyTable(MyTable instance);
...
</code></pre>
<p>Then in your own partial file you can extend this:</p>
<pre><code>public partial class MyDataContext
{
partial void OnCreated() {
//do something on data context creation
}
}
</code></pre>
<p>If you don't extend these methods they get optimised right out.</p>
<p>Partial methods can't be public - as then they'd have to be there for other classes to call. If you write your own code generators I can see them being useful, but otherwise they're only really useful for the VS designer.</p>
<p>The example I mentioned before is one possibility:</p>
<pre><code>//this code will get optimised out if no body is implemented
partial void DoSomethingIfCompFlag();
#if COMPILER_FLAG
//this code won't exist if the flag is off
partial void DoSomethingIfCompFlag() {
//your code
}
#endif
</code></pre>
<p>Another potential use is if you have a large and complex class split across multiple files and you want partial references in the calling file. However, I think in that case you should consider simplifying the class first.</p>
| <p>Here is the best resource for partial classes in C#.NET 3.0: <a href="http://msdn.microsoft.com/en-us/library/wa80x488(VS.85).aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/wa80x488(VS.85).aspx</a></p>
<p>I try to avoid using partial classes (with the exception of partials created by Visual Studio for designer files; those are great). To me, it's more important to have all of the code for a class in one place. If your class is well designed and represents one thing (<a href="http://en.wikipedia.org/wiki/Single_responsibility_principle" rel="nofollow noreferrer">single responsibility principle</a>), then all of the code for that one thing should be in one place.</p>
| 6,360 |
<p>Slashdot has a little widget that allows you to tweak your comment threshold to filter out down-modded comments. It will be in one place if you scroll to the top of the page, and as you scroll down, at some point, where its original home is about to scroll off the page, it will switch to fixed position, and stay on your screen. (To see an example, click <a href="http://news.slashdot.org/news/08/09/10/2257242.shtml" rel="nofollow noreferrer">here</a>.)</p>
<p>My question is, how can I accomplish the same effect of having a menu be in one place when scrolled up, and switch to fixed position as the user scrolls down? I know this will involve a combination of CSS and javascript. I'm not necessarily looking for a full example of working code, but what steps will my code need to go through?</p>
| <p>Okay, I figured it out. I will post it here in case it helps anyone else. This solution uses Prototype, and an internal library that gives me the registerEvent, getElementX and getElementY functions, which do what you would think.</p>
<pre><code>var MenuManager = Class.create({
initialize: function initialize(menuElt) {
this.menu = $(menuElt);
this.homePosn = { x: getElementX(this.menu), y: getElementY(this.menu) };
registerEvent(document, 'scroll', this.handleScroll.bind(this));
this.handleScroll();
},
handleScroll: function handleScroll() {
this.scrollOffset = document.viewport.getScrollOffsets().top;
if (this.scrollOffset > this.homePosn.y) {
this.menu.style.position = 'fixed';
this.menu.style.top = 0;
      this.menu.style.left = this.homePosn.x + 'px'; // style.left needs a CSS unit
} else {
this.menu.style.position = 'absolute';
this.menu.style.top = null;
this.menu.style.left = null;
}
}
});
</code></pre>
<p>Just call the constructor with the id of your menu, and the class will take it from there.</p>
| <p>For a demo (not based on the code above), check out:</p>
<p><a href="https://stackoverflow.com/questions/2352896/fixing-tabs-to-the-top-of-the-page-but-underneath-the-header">fixed-floating-elements</a></p>
| 8,059 |
<p>I would like to retrieve the ethernet address of the network interface that is used to access a particular website.</p>
<p>How can this be done in Java?</p>
<p><strong>Solution</strong> Note that the accepted solution of <code>getHardwareAddress</code> is only available in Java 6. There does not seem to be a solution for Java 5 aside from executing i(f|p)confing.</p>
| <p><a href="http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getHardwareAddress%28%29" rel="noreferrer"> java.net.NetworkInterface.getHardwareAddress</a> (method added in Java 6)</p>
<p>It has to be called on the machine you are interested in - the MAC is not transferred across network boundaries (i.e. LAN and WAN). If you want to make use of it on a website server to interrogate the clients, you'd have to run an applet that would report the result back to you.</p>
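<p>A minimal sketch of calling it locally (Java 6+). Note that this picks the interface behind the default local address, which is not necessarily the one used to reach a particular website, and <code>getHardwareAddress()</code> returns <code>null</code> for interfaces without a MAC (such as loopback):</p>
<pre><code>import java.net.InetAddress;
import java.net.NetworkInterface;

public class ShowMac {
    public static void main(String[] args) throws Exception {
        NetworkInterface nif = NetworkInterface.getByInetAddress(InetAddress.getLocalHost());
        byte[] mac = (nif == null) ? null : nif.getHardwareAddress();
        if (mac != null) {
            StringBuilder sb = new StringBuilder();
            for (byte b : mac) {
                if (sb.length() > 0) sb.append(':');
                sb.append(String.format("%02x", b));
            }
            System.out.println(sb);
        }
    }
}
</code></pre>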
<p>For Java 5 and older I found code <a href="http://forums.sun.com/thread.jspa?messageID=3424868#4204392" rel="noreferrer">parsing output of command line tools on various systems</a>.</p>
| <p>You can get the address that connects to your ServerSocket using <a href="http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getInetAddresses()" rel="nofollow noreferrer">http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getInetAddresses()</a></p>
<p>However, if your client is connecting via a NAT, then you will get the address of the router and NOT the Ethernet address. If it is on your local network (via a hub/switch, no router with NAT) then it will work as intended.</p>
| 5,411 |
<p>In PHP, how can I replicate the expand/contract feature for Tinyurls as on search.twitter.com?</p>
| <p>If you want to find out where a tinyurl is going, use fsockopen to get a connection to tinyurl.com on port 80, and send it an HTTP request like this</p>
<pre><code>GET /dmsfm HTTP/1.0
Host: tinyurl.com
</code></pre>
<p>The response you get back will look like</p>
<pre><code>HTTP/1.0 301 Moved Permanently
Connection: close
X-Powered-By: PHP/5.2.6
Location: http://en.wikipedia.org/wiki/TinyURL
Content-type: text/html
Content-Length: 0
Date: Mon, 15 Sep 2008 12:29:04 GMT
Server: TinyURL/1.6
</code></pre>
<p>example code...</p>
<pre><code><?php
$tinyurl="dmsfm";
$fp = fsockopen("tinyurl.com", 80, $errno, $errstr, 30);
if (!$fp) {
echo "$errstr ($errno)<br />\n";
} else {
$out = "GET /$tinyurl HTTP/1.0\r\n";
$out .= "Host: tinyurl.com\r\n";
$out .= "Connection: Close\r\n\r\n";
$response="";
fwrite($fp, $out);
while (!feof($fp)) {
$response.=fgets($fp, 128);
}
fclose($fp);
//now parse the Location: header out of the response
}
?>
</code></pre>
| <p>Another simple and easy way:</p>
<pre><code><?php
function getTinyUrl($url) {
    return file_get_contents('http://tinyurl.com/api-create.php?url='.urlencode($url));
}
?>
</code></pre>
| 8,769 |
<p>When a java based application starts to misbehave on a windows machine, you want to be able to kill the process in the task manager if you can't quit the application normally. Most of the time, there's more than one java based application running on my machine. Is there a better way than just randomly killing java.exe processes in hope that you'll hit the correct application eventually?</p>
<p><strong>EDIT:</strong> Thank you to all the people who pointed me to Sysinternal's Process Explorer - Exactly what I'm looking for!</p>
| <p>Download <a href="http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx" rel="noreferrer">Sysinternal's Process Explorer</a>. It's a task manager much more powerfull than Windows's own manager. </p>
<p>One of it's features is that you can see all the resources that each process is using (like registry keys, hard disk directories, named pipes, etc). So, browsing the resources that each java.exe process holds might help you determine wich one you want to kill. I usually find out by looking for the one that's using a certain log file directory.</p>
| <p>I'd suggest downloading Process Explorer from Sysinternals and looking at the different java.exe processes more closesly, that way you can get a better idea of which one to kill.</p>
<p><a href="http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx" rel="nofollow noreferrer">http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx</a></p>
<p>It's very intuitive and you can find the java.exe processes and right click and goto their properties, from there you can see their command line, time of creation, etc which can help you find the process you want to kill.</p>
<p>Hope it helps.</p>
| 8,782 |
<p>I've been writing C and C++ code for almost twenty years, but there's one aspect of these languages that I've never really understood. I've obviously used regular casts i.e.</p>
<pre><code>MyClass *m = (MyClass *)ptr;
</code></pre>
<p>all over the place, but there seem to be two other types of casts, and I don't know the difference. What's the difference between the following lines of code?</p>
<pre><code>MyClass *m = (MyClass *)ptr;
MyClass *m = static_cast<MyClass *>(ptr);
MyClass *m = dynamic_cast<MyClass *>(ptr);
</code></pre>
| <h2>static_cast</h2>
<p><code>static_cast</code> is used for cases where you basically want to reverse an implicit conversion, with a few restrictions and additions. <code>static_cast</code> performs no runtime checks. This should be used if you know that you refer to an object of a specific type, and thus a check would be unnecessary. Example:</p>
<pre><code>void func(void *data) {
// Conversion from MyClass* -> void* is implicit
MyClass *c = static_cast<MyClass*>(data);
...
}
int main() {
MyClass c;
start_thread(&func, &c) // func(&c) will be called
.join();
}
</code></pre>
<p>In this example, you know that you passed a <code>MyClass</code> object, and thus there isn't any need for a runtime check to ensure this.</p>
<h2>dynamic_cast</h2>
<p><code>dynamic_cast</code> is useful when you don't know what the dynamic type of the object is. It returns a null pointer if the object referred to doesn't contain the type casted to as a base class (when you cast to a reference, a <code>bad_cast</code> exception is thrown in that case).</p>
<pre><code>if (JumpStm *j = dynamic_cast<JumpStm*>(&stm)) {
...
} else if (ExprStm *e = dynamic_cast<ExprStm*>(&stm)) {
...
}
</code></pre>
<p>You can <strong>not</strong> use <code>dynamic_cast</code> for downcast (casting to a derived class) <strong>if</strong> the argument type is not polymorphic. For example, the following code is not valid, because <code>Base</code> doesn't contain any virtual function:</p>
<pre><code>struct Base { };
struct Derived : Base { };
int main() {
Derived d; Base *b = &d;
dynamic_cast<Derived*>(b); // Invalid
}
</code></pre>
<p>An "up-cast" (cast to the base class) is always valid with both <code>static_cast</code> and <code>dynamic_cast</code>, and also without any cast, as an "up-cast" is an implicit conversion (assuming the base class is accessible, i.e. it's a <code>public</code> inheritance).</p>
<h2>Regular Cast</h2>
<p>These casts are also called C-style cast. A C-style cast is basically identical to trying out a range of sequences of C++ casts, and taking the first C++ cast that works, without ever considering <code>dynamic_cast</code>. Needless to say, this is much more powerful as it combines all of <code>const_cast</code>, <code>static_cast</code> and <code>reinterpret_cast</code>, but it's also unsafe, because it does not use <code>dynamic_cast</code>.</p>
<p>In addition, C-style casts not only allow you to do this, but they also allow you to safely cast to a private base-class, while the "equivalent" <code>static_cast</code> sequence would give you a compile-time error for that.</p>
<p>Some people prefer C-style casts because of their brevity. I use them for numeric casts only, and use the appropriate C++ casts when user defined types are involved, as they provide stricter checking.</p>
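<p>A small illustration of that last point (a sketch - not something to rely on in well-designed code):</p>
<pre><code>struct Base { };
struct Derived : private Base { };

int main() {
    Derived d;
    Base *b1 = (Base *)&d;                 // OK: a C-style cast may ignore the access level
    // Base *b2 = static_cast<Base*>(&d);  // error: 'Base' is an inaccessible base of 'Derived'
}
</code></pre>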
| <p><code>dynamic_cast</code> has runtime type checking and only works with references and pointers, whereas <code>static_cast</code> does not offer runtime type checking. For complete information, see the MSDN article <em><a href="http://msdn.microsoft.com/en-us/library/c36yw7x9(VS.80).aspx" rel="noreferrer">static_cast Operator</a></em>.</p>
| 4,767 |
<p>How can my app get a valid "last time connected to domain" timestamp from Windows, even when the app is running offline?</p>
<p>Background:
I am writing an application that is run on multiple client machines throughout my company. All of these client machines are on one of the AD domains implemented by my company. This application needs to take certain measures if the client machine has not communicated with the AD for a period of time.</p>
<p>An example might be that a machine running this app is stolen. After e.g. 4 weeks, the application refuses to work because it detects that the machine has not communicated with its AD domain for 4 weeks.</p>
<p>Note that this must not be tied to a user account because the app might be running as a Local Service account. It the computer-domain relationship that I'm interested in.</p>
<p>I have considered and rejected using <code>WinNT://<domain>/<machine>$,user</code> because it doesn't work while offline. Also, any <code>LDAP://...</code> lookups won't work while offline.</p>
<p>I have also considered and rejected scheduling this query on a daily basis and storing the timestamp in the registry or a file. This solution requires too much setup and coding. Besides, this value simply MUST be stored locally by Windows.</p>
| <p>I don't believe this value is stored on the client machine. It's stored in Active Directory, and you can get a list of inactive machines using the <a href="http://technet.microsoft.com/en-us/library/cc730720.aspx" rel="nofollow noreferrer">Dsquery</a> tool.</p>
<p>The best option is to have your program do a simple test such as connection to a DC, and then store the timestamp of that action.</p>
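<p>For reference, the list of inactive machine accounts mentioned above can be pulled from a domain-joined, online machine with something like this (the 4-week threshold is only an example):</p>
<pre><code>dsquery computer -inactive 4
</code></pre>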
| <p>IMHO I don't think the client machine would store a timestamp of the last time it communicated with AD. This information is stored in Active Directory itself (i.e. on the DC).</p>
<p>Once a user logs into, say, a Windows machine, the credentials are cached. If that machine is disconnected from the network, the credentials will last forever. You can turn this feature off with group policies, so that the machine does not cache any credentials.</p>
| 8,883 |
<p>Looking in the Cura interface, I can set any whole number 0-100 for the infill percentage. Does Cura have an algorithm to calculate a pattern for any of those possible values, or does it have a few patterns where it selects the closest one?</p>
| <p>I fear I'm going to deny your question. The infil percentage and the infil pattern are two orthogonal properties, both of which contribute to the strength, density, mass, and print time of an object. Since there's no way for an algo to "know" what your desired outcome is, this can't be done.</p>
<p>Note - I used 'orthogonal' in the Hilbert sense, meaning neither property is a function of the other. </p>
| <p>From what I can see, the only way to change the pattern is the manual way.
It would still be possible to create a plugin that selects the pattern - but that is a rather complex solution, unless you have a fully automated pipeline.</p>
| 994 |
<p>Does anyone know of any good tools (I'm looking for IDEs) to write assembly on the Mac. Xcode is a little cumbersome to me.</p>
<p>Also, on the Intel Macs, can I use generic x86 asm? Or is there a modified instruction set? Any information about post Intel.</p>
<p>Also: I know that on windows, asm can run in an emulated environment created by the OS to let the code think it's running on its own dedicated machine. Does OS X provide the same thing?</p>
| <p>After installing any version of Xcode targeting Intel-based Macs, you should be able to write assembly code. Xcode is a suite of tools, only one of which is the IDE, so you don't have to use it if you don't want to. (That said, if there are specific things you find clunky, please file a bug at <a href="http://bugreport.apple.com/" rel="noreferrer" title="Apple's bug reporter">Apple's bug reporter</a> - every bug goes to engineering.) Furthermore, installing Xcode will install both the Netwide Assembler (NASM) and the GNU Assembler (GAS); that will let you use whatever assembly syntax you're most comfortable with.</p>
<p>You'll also want to take a look at the <a href="https://developer.apple.com/library/mac/documentation/CompilerTools/Conceptual/LLVMCompilerOverview/index.html#//apple_ref/doc/uid/TP40010019" rel="noreferrer" title="Compiler & Debugging Guides">Compiler & Debugging Guides</a>, because those document the calling conventions used for the various architectures that Mac OS X runs on, as well as how the binary format and the loader work. The IA-32 (x86-32) calling conventions in particular may be slightly different from what you're used to.</p>
<p>Another thing to keep in mind is that the system call interface on Mac OS X is different from what you might be used to on DOS/Windows, Linux, or the other BSD flavors. System calls aren't considered a stable API on Mac OS X; instead, you always go through libSystem. That will ensure you're writing code that's portable from one release of the OS to the next.</p>
<p>Finally, keep in mind that Mac OS X runs across a pretty wide array of hardware - everything from the 32-bit Core Single through the high-end quad-core Xeon. By coding in assembly you might not be optimizing as much as you think; what's optimal on one machine may be pessimal on another. Apple regularly measures its compilers and tunes their output with the "-Os" optimization flag to be decent across its line, and there are extensive vector/matrix-processing libraries that you can use to get high performance with hand-tuned CPU-specific implementations.</p>
<p>Going to assembly for fun is great. Going to assembly for speed is not for the faint of heart these days.</p>
| <p>Forget about finding an IDE to write/run/compile assembler on the Mac. But remember, the Mac is UNIX. See <a href="http://asm.sourceforge.net/articles/linasm.html" rel="nofollow noreferrer">http://asm.sourceforge.net/articles/linasm.html</a>, a decent (though short) guide to running assembler via GCC on Linux. You can mimic this. Macs use Intel chips, so you want to look at Intel syntax.</p>
| 2,751 |
<p>I'm trying to import an XML file via a web page in a Ruby on Rails application; the Ruby view code is as follows (I've removed HTML layout tags to make reading the code easier)</p>
<pre><code><% form_for( :fmfile, :url => '/fmfiles', :html => { :method => :post, :name => 'Form_Import_DDR', :enctype => 'multipart/form-data' } ) do |f| %>
<%= f.file_field :document, :accept => 'text/xml', :name => 'fmfile_document' %>
<%= submit_tag 'Import DDR' %>
<% end %>
</code></pre>
<p>Results in the following HTML form</p>
<pre><code><form action="/fmfiles" enctype="multipart/form-data" method="post" name="Form_Import_DDR"><div style="margin:0;padding:0"><input name="authenticity_token" type="hidden" value="3da97372885564a4587774e7e31aaf77119aec62" />
<input accept="text/xml" id="fmfile_document" name="fmfile_document" size="30" type="file" />
<input name="commit" type="submit" value="Import DDR" />
</form>
</code></pre>
<p>The Form_Import_DDR method in the 'fmfiles_controller' is the code that does the hard work of reading the XML document in using REXML. The code is as follows</p>
<pre><code>@fmfile = Fmfile.new
@fmfile.user_id = current_user.id
@fmfile.file_group_id = 1
@fmfile.name = params[:fmfile_document].original_filename
respond_to do |format|
if @fmfile.save
require 'rexml/document'
doc = REXML::Document.new(params[:fmfile_document].read)
doc.root.elements['File'].elements['BaseTableCatalog'].each_element('BaseTable') do |n|
@base_table = BaseTable.new
@base_table.base_table_create(@fmfile.user_id, @fmfile.id, n)
end
</code></pre>
<p>And it carries on reading all the different XML elements in.</p>
<p>I'm using Rails 2.1.0 and Mongrel 1.1.5 in Development environment on Mac OS X 10.5.4, site DB and browser on same machine.</p>
<p>My question is this. This whole process works fine when reading an XML document with character encoding UTF-8 but fails when the XML file is UTF-16, does anyone know why this is happening and how it can be stopped?</p>
<p>I have included the error output from the debugger console below, it takes about 5 minutes to get this output and the browser times out before the following output with the 'Failed to open page'</p>
<pre><code>Processing FmfilesController#create (for 127.0.0.1 at 2008-09-15 16:50:56) [POST]
Session ID: BAh7CDoMdXNlcl9pZGkGOgxjc3JmX2lkIiVmM2I3YWU2YWI4ODU2NjI0NDM2
NTFmMDE1OGY1OWQxNSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxh
c2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA==--dd9f588a68ed628ab398dd1a967eedcd09e505e0
Parameters: {"commit"=>"Import DDR", "authenticity_token"=>"3da97372885564a4587774e7e31aaf77119aec62", "action"=>"create", "fmfile_document"=>#<File:/var/folders/LU/LU50A0vNHA07S4rxDAOk4E+++TI/-Tmp-/CGI.3001.1>, "controller"=>"fmfiles"}
[4;36;1mUser Load (0.000350)[0m [0;1mSELECT * FROM "users" WHERE (id = 1) LIMIT 1[0m
[4;35;1mFmfile Create (0.000483)[0m [0mINSERT INTO "fmfiles" ("name", "file_group_id", "updated_at", "report_created_at", "report_link", "report_version", "option_on_open_account_name", "user_id", "option_default_custom_menu_set", "option_on_close_script", "path", "report_type", "option_on_open_layout", "option_on_open_script", "created_at") VALUES('TheTest_fp7 2.xml', 1, '2008-09-15 15:50:56', NULL, NULL, NULL, NULL, 1, NULL, NULL, NULL, NULL, NULL, NULL, '2008-09-15 15:50:56')[0m
REXML::ParseException (#<Iconv::InvalidCharacter: "਼䙍偒数 (followed by a few thousand similar looking chinese characters)
䙍偒数潲琾", ["\n"]>
/Library/Ruby/Site/1.8/rexml/encodings/ICONV.rb:7:in `conv'
/Library/Ruby/Site/1.8/rexml/encodings/ICONV.rb:7:in `decode'
/Library/Ruby/Site/1.8/rexml/source.rb:50:in `encoding='
/Library/Ruby/Site/1.8/rexml/parsers/baseparser.rb:210:in `pull'
/Library/Ruby/Site/1.8/rexml/parsers/treeparser.rb:21:in `parse'
/Library/Ruby/Site/1.8/rexml/document.rb:190:in `build'
/Library/Ruby/Site/1.8/rexml/document.rb:45:in `initialize'
</code></pre>
| <p>Rather than a rails/mongrel problem, it sounds more likely that there's an issue either with your XML file or with the way REXML handles it. You can check this by writing a short script to read your XML file directly (rather than within a request) and seeing if it still fails.</p>
<p>Assuming it does, there are a couple of things I'd look at. First, I'd check you are running the latest version of REXML. A couple of years ago there was a bug (<a href="http://www.germane-software.com/projects/rexml/ticket/63" rel="nofollow noreferrer">http://www.germane-software.com/projects/rexml/ticket/63</a>) in its UTF-16 handling. </p>
<p>The second thing I'd check is whether your issue is similar to this: <a href="http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/ba7b0585c7a6330d" rel="nofollow noreferrer">http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/ba7b0585c7a6330d</a>. If so, you can try the workaround in that thread.</p>
<p>If none of the above helps, then please reply with more information, such as the exception you are getting when you try and read the file.</p>
| <p>Since getting this to work requires me to only change the encoding attribute of the first XML element to have the value UTF-8 instead of UTF-16, the XML file is actually UTF-8 and labelled wrongly by the application that generates it.</p>
<p>The XML file is a FileMaker DDR export produced by FileMaker Pro Advanced 8.5 on OS X 10.5.4</p>
| 8,904 |
<p>Does anyone know of a hot end that is sealed? What I mean is that the hot end has a rubber seal where the filament enters, to keep the top airtight (in order to eliminate oozing).</p>
<p>I am looking to build a dual extruder printer, but I do not want any oozing from the hot end which is not in use. I could build a system to retract and 'close' the nozzle, but it would be much more elegant if I could just seal the top of the hot end, thus achieving the same effect as when you hold water up in a straw by covering the top with your finger.</p>
| <p>There's a lot that can be done to improve the removability of supports, and much of this is not widely known/published.</p>
<p>One big wrong default in Cura that contributes to problems with support is <em>Limit Support Retractions</em>, which defaults to on. This causes heavy stringing between components of the support structure that should be separate, and poor layer adhesion between layers of the support <em>and between layers of whatever is printed right after the support</em> (!!), making support more brittle and difficult to remove in clean chunks. This setting should be turned off.</p>
<p>I find <em>Enable Support Brim</em> is also useful. Its nominal purpose is to make supports adhere to the bed better, but it also gives them more of a solid bottom so that the structure is rigid and admits snapping off as a chunk.</p>
<p>A nonzero <em>Support Wall Line Count</em> (it's zero by default for zigzag and most support patterns, but one by default for support tree and others) can make chunks of support easier to remove by making them more rigid.</p>
<p><em>Connect Support Lines</em> (also called <code>zig_zaggify_infill</code>) helps with rigidity too, and with reducing time wasted on retractions once you turn off <em>Limit Support Retractions</em>.</p>
<p>Aside from these less-well-known tunables, the obvious ones are <em>Support Z Distance</em> and <em>Support X/Y Distance</em>, especially Z. You can increase this slightly from the default to make supports easier to remove, but it will hurt the quality of the surface just above the support (making it less flat, more stringy like a bridge). And the biggest one of all is <em>Support Angle</em>. Generally increase it as high as you can go, after doing some test prints to determine the maximum overhang angle you can print without support. This will save material and make it easier to remove what supports remain.</p>
<p>Finally, aside from support options, you want to make sure you don't have underlying print problems causing oozing, bulging, or other dimensional-accuracy/extrusion-accuracy issues. This is because any material that is printed or expands into the wrong place will, if it's adjacent to support material, bond to the support material and make it hard to remove.</p>
| <p>You could reduce the <code>Support Density</code>:</p>
<blockquote>
<p>A higher value results in better overhangs, but the supports are harder to remove.</p>
</blockquote>
<p>Furthermore read <a href="https://3dprinting.stackexchange.com/a/7991/">this answer</a> on question: "<a href="https://3dprinting.stackexchange.com/questions/7989/difficult-to-remove-support-material/">Difficult to remove support material</a>".</p>
| 1,723 |
<p>Is there any risk of damaging stepper motors if I set the travel speed too high?
What is the maximum safe travel speed?</p>
<p>My printer is a German RepRap Neo.</p>
<p>I currently use 120 mm/s. Is it safe to increase this value to 200 mm/s?
What would my printer do if I set a very high travel speed?</p>
| <p>Short answer <strong>no</strong></p>
<p>We use stepper drivers to limit the current; the travel speed is capped by the amount of current supplied by the stepper drivers. This prevents the stepper motors from damaging themselves. You can set 200 mm/s in the slicer, but you have no guarantee that it will be reached in real life.</p>
<p>One thing to keep in mind, though, is that setting your travel speed too high can induce artifacts such as shifted layers, ghosting, uneven extrusion, etc. So the best thing is to keep the speeds within the specified limits.</p>
| <p>A high speed is unlikely to be reached unless you also set a high acceleration, and acceleration is generally more likely to cause a problem (unless you reach the pulse rate limit of the drivers).</p>
<p>High acceleration will increase vibration, and critically requires higher torque from the motors. At some point, the torque will exceed the motor/drive current capability, and the motor will skip steps. As soon as this starts to happen, your print will become unusable.</p>
<p>Before reaching the point of missed steps, you're likely to see other quality issues, but unless you're in a very hot environment, unlikely to see damage to the motor. Depending on the quality and heatsinking of the stepper driver, you might see overheating here (you can check for overheating of the board though).</p>
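<p>To get a feel for the numbers involved: the step pulse rate the electronics must generate scales directly with speed, so you can estimate how close a given travel speed gets to the driver's pulse-rate limit. A rough sketch (the 80 steps/mm figure is a common X/Y value on many printers and is an assumption here, not a RepRap Neo specification):</p>
<pre><code># Step pulse rate needed for a given travel speed.
# steps_per_mm = 80 is an assumed, typical X/Y value; check your own firmware settings.
steps_per_mm = 80
for speed in (120, 200):   # mm/s, the two speeds mentioned in the question
    print(f"{speed} mm/s -> {steps_per_mm * speed} steps/s")
</code></pre>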
| 1,199 |
<p>Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions?</p>
| <p>You could try <a href="http://selenium.openqa.org/" rel="noreferrer">selenium</a>.</p>
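<p>A minimal sketch of what driving Firefox from Python with the Selenium WebDriver bindings looks like (this assumes the <code>selenium</code> package and a Firefox driver such as geckodriver are installed; the URL is just a placeholder):</p>
<pre><code># Minimal sketch: automate Firefox from Python via Selenium WebDriver.
from selenium import webdriver

driver = webdriver.Firefox()              # launch a Firefox instance
try:
    driver.get("http://example.com")      # navigate to a page
    print(driver.title)                   # read something back from the page
finally:
    driver.quit()                         # always close the browser
</code></pre>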
| <p>The language of choice for Firefox is JavaScript. Unless you have a specific requirement that requires Python, I would advise you to use that.</p>
| 8,498 |
<p>When connecting to remote hosts via ssh, I frequently want to bring a file on that system to the local system for viewing or processing. Is there a way to copy the file over without (a) opening a new terminal/pausing the ssh session (b) authenticating again to either the local or remote hosts which works (c) even when one or both of the hosts is behind a NAT router?</p>
<p>The goal is to take advantage of as much of the current state as possible: that there is a connection between the two machines, that I'm authenticated on both, that I'm in the working directory of the file---so I don't have to open another terminal and copy and paste the remote host and path in, which is what I do now. The best solution also wouldn't require any setup before the session began, but if the setup was a one-time or able to be automated, than that's perfectly acceptable. </p>
| <p><a href="http://zssh.sourceforge.net/" rel="noreferrer">zssh</a> (a <a href="http://en.wikipedia.org/wiki/ZMODEM" rel="noreferrer">ZMODEM</a> wrapper over openssh) does exactly what you want.</p>
<ul>
<li><p>Install <a href="http://zssh.sourceforge.net/" rel="noreferrer">zssh</a> and use it instead of openssh (which I assume that you normally use)</p></li>
<li><p>You'll have to have the <a href="http://www.ohse.de/uwe/software/lrzsz.html" rel="noreferrer">lrzsz</a> package installed on both systems.</p></li>
</ul>
<p>Then, to transfer a file <code>zyxel.png</code> from remote to local host:</p>
<pre><code>antti@local:~$ zssh remote
Press ^@ (C-Space) to enter file transfer mode, then ? for help
...
antti@remote:~$ sz zyxel.png
**B00000000000000
^@
zssh > rz
Receiving: zyxel.png
Bytes received: 104036/ 104036 BPS:16059729
Transfer complete
antti@remote:~$
</code></pre>
<p>Uploading goes similarly, except that you just switch <a href="http://linux.die.net/man/1/rz" rel="noreferrer">rz(1)</a> and <a href="http://linux.die.net/man/1/sz" rel="noreferrer">sz(1)</a>.</p>
<p>Putty users can try <a href="http://leputty.sourceforge.net/" rel="noreferrer">Le Putty</a>, which has similar functionality.</p>
| <p>You should be able to set up public & private keys so that no auth is needed. </p>
<p>Which way you do it depends on security requirements, etc (be aware that there are linux/unix ssh worms which will look at keys to find other hosts they can attack).</p>
<p>I do this all the time from behind both linksys and dlink routers. I think you may need to change a couple of settings but it's not a big deal.</p>
| 7,248 |
<p><img src="https://cdn.shopify.com/s/files/1/0046/3781/8929/files/CR-10-Max-_2.gif" alt="Sensor" /></p>
<p><a href="https://i.stack.imgur.com/9fDW1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9fDW1.jpg" alt="enter image description here" /></a>
I'm just finishing the set-up of a CR-10 Max. It is a new printer.</p>
<p>I can't manage to feed the filament through the material shortage sensor.</p>
<p>I can hear the micro switch click; the LED turns blue, then a few millimeters after that (33 mm total from the entry point), there is something that prevents the filament from going forward. I don't see any switch on the outside of the sensor, and I applied a reasonable amount of force on the filament.</p>
<p>Can you tell me how to troubleshoot this?</p>
| <p>I've designed similar sensor casings; sometimes the filament catches a ridge/ledge or part of the cavity, even when it is chamfered or rounded. The arm of the limit switch pushes the filament up, away from the filament's straight path.</p>
<p>Have you tried cutting the filament at a very sharp angle? That may work.</p>
| <p>Here is a 3D model that better explains why it was catching and how to remedy the problem:</p>
<p><a href="https://i.stack.imgur.com/ooe48.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ooe48.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/dTy92.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dTy92.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/m4Y91.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m4Y91.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/Pv4Op.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pv4Op.png" alt="enter image description here" /></a></p>
<p>The filament is likely to have some twists and the bevel you create may not be in the correct orientation once it catches the ridge.</p>
<p>The exit hole is not chamfered on the inside, as it is drilled from the outside, and I guess an inside chamfer is not something you get at this price range.</p>
<p>I found that twisting the filament 90° at most, one way or the other, always helps the filament find its way. It can be tricky because of the small diameter/stiffness of the filament, with the spool still attached to the other end.</p>
| 1,704 |
<p>I am looking to manage a SQL Server 2008 DB using Management Studio 2005. The reason for this is because our server is a 64-bit machine and we only have the 64-bit version of the software. </p>
<p>Is this possible? </p>
<p>How about managing a SQL Server 2005 DB using Management Studio 2008?</p>
| <p>UPDATE: You can use <a href="http://support.microsoft.com/kb/943656" rel="noreferrer">Cumulative update package 5</a> for SQL Server 2005 Service Pack 2 to connect to 2008.</p>
<p>FIX:
50002151 946127 (<a href="http://support.microsoft.com/kb/946127/" rel="noreferrer">http://support.microsoft.com/kb/946127/</a>) FIX: You may experience problems when you use SQL Server Management Studio in SQL Server 2005 to connect to an instance of SQL Server 2008 </p>
| <p>Unless I'm mistaken and things have changed, you cannot use SQL Server 2008 to save a backup which restores to SQL Server 2005. I found this out the hard way :(</p>
| 7,535 |
<p>I'm looking to increase the printing speed by increasing the volumetric flow rate, which is currently at 5 mm³/s. Larger amounts cause the feeding mechanism to skip steps.</p>
<p>I'm running at 190 °C, which helps with easier bridging and less stringing, and I personally see no reason to increase the temperature to the popular 205 °C.</p>
<p>So, my thought process is the following: since I run at a lower temperature, there is still potential for the heating block to provide more heat, and I need a longer nozzle to accumulate more heat and provide more surface area for transfer to the filament (PLA), to speed up the melting of the plastic inside the nozzle (<em>which seems to be the bottleneck</em>).</p>
<p><em>That's similar to using larger tips on a soldering iron when faced with heating up large surfaces in order to desolder something large: since we need to stay at a precise temperature, we need to increase the heat supply as well.</em></p>
<p>The suggested solution is to switch to the E3D's Volcano "everything included" kit. Which is nice and cool, but I don't think it's that necessary.</p>
<ul>
<li>Is it possible to just switch to a volcano nozzle? (Manufacturer#: VOLCANO-NOZZLE-175-0400)</li>
<li>Would it actually noticeably help to increase the extrusion speed?</li>
</ul>
<p><em>Current setup:</em></p>
<ul>
<li><em>Ender 3 Pro, no mods</em></li>
<li><em>Classic 0.4 mm nozzle</em></li>
</ul>
| <p>This is opinion-based, but the volcano has drawbacks that affect print quality, mine is oozier and sloppier than a V6 with the shorter, more precise melt zone. It isn’t a slam dunk upgrade, more of a special applications part. I think there is no point to using a Volcano unless you’re running big nozzles fast, like .8 mm.</p>
<p>Your 5 mm<sup>3</sup>/s throughput is low, the V6 is generally known as a ~13 mm<sup>3</sup>/s volumetric throughput, vs the Volcano at 25 mm<sup>3</sup>/s. This is due to the low temperature you favor, possibly something not ideal with your extruder. I could see...</p>
<ul>
<li><p>just living with the slow speed. I realize I vastly prefer print quality over print speed because one takes no human interaction and the other does.</p>
</li>
<li><p>do what everyone else does. go hotter, plastic viscosity goes way down even with a 5-10 degree increase</p>
</li>
<li><p>increase extruder torque. If you can increase stepper current safely (know the limit for your driver and motor!) with a trim pot on the stepper driver, you may be able to get more torque before the motor skips steps. This can increase motor temperature. If you get more torque, at some point the filament will slip and get carved up by the extruder’s hobbed gear. Double geared extruder designs like Bondtech can grip the filament from both sides and get more traction on the filament if you want to get diabolical shoving the filament.</p>
</li>
<li><p>use a larger nozzle for faster printing at your preferred temp. I’m loving the .6 mm nozzle for bigger prints. It has most of the detail of the .4 mm but double the plastic comes out. A larger nozzle hole means less pressure in the nozzle at a given temp and extruder feed rate</p>
</li>
</ul>
<p>If you think the extruder might not be all it can be, try heating up the nozzle hotter than usual, and get the extruder going slow and steady, and pull a little on the filament by hand, see if it skips steps easily with a little resistance. It should pull pretty strong. I had a failing wire to my extruder that manifested in wimpy extrusion.</p>
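<p>To put those flow figures in context, volumetric throughput is roughly line width × layer height × print speed, so you can estimate the fastest speed a given hotend limit supports. A back-of-the-envelope sketch (the 5/13/25 mm³/s figures come from the discussion above; the 0.4 × 0.2 mm extrusion cross-section is an assumption based on the asker's nozzle):</p>
<pre><code># Rough estimate of the maximum print speed a volumetric flow limit allows.
# flow (mm^3/s) = line_width (mm) * layer_height (mm) * speed (mm/s)
def max_speed(flow_limit, line_width=0.4, layer_height=0.2):
    return flow_limit / (line_width * layer_height)

for name, limit in [("current setup", 5), ("E3D V6 (typical)", 13), ("Volcano (typical)", 25)]:
    print(f"{name}: about {max_speed(limit):.0f} mm/s at 0.4 x 0.2 mm")
</code></pre>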
| <p>Yes, the Volcano or the Super Volcano allow for a larger flow rate (typically when using larger nozzles); that is what they were designed for. Just the nozzle will not help you: you need the longer nozzle shaft to be inside a Volcano heater block, else you cannot transfer the heat.</p>
<p>According to measurements from Metaform, the volumetric flow of a Volcano hotend is larger than that of the regular E3D V6 hotend.</p>
<p><a href="https://i.stack.imgur.com/5p6Le.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5p6Le.png" alt="enter image description here" /></a></p>
| 1,651 |
<p>I own a delta 3D printer. The problem is that, at the beginning of a print, the extruder outputs dirty filament. I want a clean filament flow at the start of my prints!</p>
<p>How can I make the hotend move 10 mm past the edge of the print surface (glass plate), extrude the bad filament, and go back to printing again? Can this be done with G-code?</p>
<p>My Z height is 190 mm and the glass plate diameter is 120 mm. I'm using Marlin + Ramps 1.4.</p>
<p>I'm using Repetier-Host and CuraEngine as Slicer, but I really would like a G-code that can work on multiple environments like Cura and Repetier. I just want to add it to the start G-code and print!</p>
| <p>You can achieve this using the <code>G1</code> command. I don't know your exact printer, but you should be able to use something like this (add to the start G-code in your slicer):</p>
<pre><code>G1 X0 Y62 Z0.2 F9000 ; Move slightly past edge of bed
G92 E0 ; Zero extruder position
G0 E1 F100 ; Extrude 1mm of filament
G92 E0 ; Zero again
G1 X0 Y0 F9000 ; Move back to center of bed
</code></pre>
<p>The first line moves the extruder to slightly past the edge of the bed (since the diameter is 120, the radius is 60, and 62 is slightly past the maximum radius). I've set Z to 0.2mm to avoid hitting the plate, but you might be able to lower this.</p>
<p>The next 3 lines zero the extruder position, extrudes 1mm of filament, and resets it to zero (when starting a print the slicer expects E to start at 0).</p>
<p>The final line moves back to the bed center. This might not be necessary (you might be able to replace this line with just <code>G1 F9000</code> to set the feedrate back to something that makes sense for travel moves) because you don't need to move back explicitly: the slicer will take care of moving the head into position to start the print.</p>
| <p>A lot of slicers will have a Wipe option. Here are some examples:</p>
<ul>
<li><p>See <a href="https://jinschoi.github.io/simplify3d-docs/" rel="nofollow noreferrer">Unofficial Simplify3D Documentation</a>. Go to the section talking about <em>Wipe Nozzle</em>, under the heading <strong>Extruder Tab</strong></p>
<blockquote>
<p>Two more ooze-fighting options are Coast at end and Wipe nozzle. Coast turns off the extruder the specified distance before it normally would, to drain what would have oozed as the end of a line. This can help with ooze-induced blobs at the end of lines, but if turned up too high will lead to gaps in your print walls. Changes to this setting will be visible as gaps in the g-code preview.</p>
<p>Wipe has the nozzle retrace over the start of a perimeter line at the
end of a perimeter for the specified distance with the extruder off,
to leave any ooze behind before proceeding. It is similar to Coast in
that it moves the extruder without extruding, but wipe occurs after
the end of the line while coast occurs before.</p>
</blockquote></li>
<li><p>Slic3r has some sort of coasting. But I think in their docs the option is there: <a href="http://manual.slic3r.org/expert-mode/fighting-ooze" rel="nofollow noreferrer">Slic3r Manual - Fighting Ooze</a></p>
<blockquote>
<p>Wipe before retract - Moves the nozzle whilst retracting so as to reduce the chances of a blob forming.</p>
</blockquote></li>
</ul>
<p>As you asked for G-Code here you go:</p>
<ul>
<li><p><a href="http://forums.reprap.org/read.php?4,620368" rel="nofollow noreferrer">Reprap Forum - Wipe nozzle via GCODE</a></p>
<p>Example</p>
<pre><code>;Sliced at: {day} {date} {time}
;Basic settings: Layer height: {layer_height} Walls: {wall_thickness} Fill: {fill_density}
;Print time: {print_time}
;Filament used: {filament_amount}m {filament_weight}g
;Filament cost: {filament_cost}
;M190 S{print_bed_temperature} ;Uncomment to add your own bed temperature line
;M109 S{print_temperature} ;Uncomment to add your own temperature line
G21 ;metric values
G90 ;absolute positioning
M82 ;set extruder to absolute mode
M107 ;start with the fan off
G28 X0 Y0 ;move X/Y to min endstops
G28 Z0 ;move Z to min endstops
M117 Auto-level...
G29 ;auto-level
;G92 Z-.01 ; Lower = Z Pos, Lift = Z Neg
M117 Preparing...
G1 Z10.0 F{travel_speed} ;move the platform down 15mm
G92 E0 ;zero the extruded length
G1 F100 E30 ;extrude 10mm of feed stock
G92 E0 ;zero the extruded length again
G1 F{travel_speed}
;Put printing message on LCD screen
M300 S900 P160 ;start beep
M300 S1000 P160
M300 S2000 P160
M0 ;Wait for the user
M117 Printing...
</code></pre></li>
<li><p><a href="https://forum.lulzbot.com/viewtopic.php?t=2921" rel="nofollow noreferrer">Lulzbot forum - Start GCODE Script for Wipe</a>, in particular <a href="https://forum.lulzbot.com/viewtopic.php?t=2921#p17782" rel="nofollow noreferrer">this post</a>:</p>
<p>Example</p>
<pre><code>G91 ; switch to relative positioning
G1 Z10 ; safe raise of z axis to ensure probe doesn't hit bed clamp
G90 ; switch back to absolute positioning
G28 ; home all axes
G29 ; level print bed
G1 X298 Y137 Z2 F5000 ; move to wait position right hand side of the table
G1 Z0.4 ; position nozzle
G1 E25 F300 ; purge nozzle
M400 ; wait for purge to complete
G1 X285 F1200 ; slow wipe
G1 Z0.5 F1200 ; lift
</code></pre></li>
</ul>
<p>That should get you started.</p>
| 357 |
<p>What's the best way to allow a user to browse for a file in C#?</p>
| <pre><code>using (OpenFileDialog dlg = new OpenFileDialog())
{
dlg.Title = "Select a file";
if (dlg.ShowDialog()== DialogResult.OK)
{
//do something with dlg.FileName
}
}
</code></pre>
| <p>I would say use the standard "Open File" dialog box (<a href="http://msdn.microsoft.com/en-us/library/system.windows.forms.openfiledialog.aspx" rel="nofollow noreferrer">OpenFileDialog</a>), this makes it less intimidating for new users and helps with a consistant UI.</p>
| 5,088 |
<p>How would one go about squaring the gantry relative to the frame?</p>
<p>Referring to the image below, distance A and distance B are not equal.</p>
<p>Also would this account for the reason why when I attempt to print a circle it is not perfectly circular, and when I try to print a square, it is tilted?</p>
<p><a href="https://i.stack.imgur.com/utLCW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/utLCW.jpg" alt="enter image description here" /></a></p>
| <p>You indicated that you were using 24 volts, implying you have a 24 volt bed. ATX power supplies do not have 24 volt outputs. The highest is 12 volts which would heat up the bed, but not fast or probably to full temperature.</p>
| <p>voltage = current x resistance</p>
<p>An ATX PSU is designed to only allow approximately 16 amps per pair of YELLOW and BLACK wires. The yellow is 12 V and the black is GND. If your bed were rated at 24 V then its resistance would be higher than that of the 12 V bed. The best solution for you would be to get a 12 V heated bed, as opposed to using a boost converter. Reason being is that you would need to get a boost converter that can tolerate over 200 W of power! It's just cheaper to replace the bed.</p>
<p>Finally, you need to consider the heated bed, the extruders, and the stepper motors' power needs before you purchase a PSU. If you had a multimeter with a current measurement, then you could accurately determine how much power each one uses and then purchase to size.
Based on your bed size (600 mm) I think that 500 W should be enough.</p>
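<p>As a worked example of why the voltage mismatch matters (P = V²/R): a bed wound for 24 V that is fed 12 V only receives a quarter of its rated power. A quick sketch (the 220 W rating is an assumed figure for illustration only):</p>
<pre><code># Power delivered to a fixed-resistance heated bed at different supply voltages.
rated_voltage = 24.0                                  # V, what the bed was designed for
rated_power = 220.0                                   # W, assumed rating for illustration
resistance = rated_voltage ** 2 / rated_power         # about 2.6 ohm

for supply in (24.0, 12.0):
    power = supply ** 2 / resistance                  # P = V^2 / R
    current = supply / resistance                     # I = V / R
    print(f"{supply:.0f} V supply: {power:.0f} W at {current:.1f} A")
</code></pre>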
| 1,748 |
<p>I'm about to start testing an intranet web application. Specifically, I have to determine the application's performance.</p>
<p>Please could someone suggest formal/informal standards for how I can judge the application's performance.</p>
| <p>Use some tool for stress and load testing. If you're using Java, take a look at <a href="http://jakarta.apache.org/jmeter/" rel="noreferrer">JMeter</a>. It provides different methods to test your application's performance. You should focus on:</p>
<ul>
<li><strong>Response time</strong>: How fast your application is running for normal requests. Test some read/write use case</li>
<li><strong>Load test</strong>: How your application behaves in high traffic times. The tool will submit several requests (you can configure that properly) during a period of time.</li>
<li><strong>Stress test</strong>: Can your application operate over a long period of time? This test will push your application to its limits</li>
</ul>
<p>Start with this, if you're interested, there are other kinds of tests.</p>
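<p>If you just want a rough response-time baseline before setting up a full JMeter test plan, a naive probe can be scripted in a few lines. This is a quick sanity check only, not a substitute for a real load-testing tool (it assumes the <code>requests</code> package and a reachable URL of your intranet application):</p>
<pre><code># Naive response-time probe; use JMeter or similar for proper load/stress testing.
import time
import requests

URL = "http://intranet.example.com/somepage"   # placeholder endpoint
samples = []
for _ in range(20):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    samples.append(time.perf_counter() - start)

samples.sort()
print("min %.0f ms, median %.0f ms, max %.0f ms"
      % (samples[0] * 1000, samples[len(samples) // 2] * 1000, samples[-1] * 1000))
</code></pre>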
| <p>To test the front-end, YSlow is great for getting statistics for how long your pages take to load from a user perspective. It breaks down into stats for each specific HTTP request, the time it took, etc. Get it at <a href="http://developer.yahoo.com/yslow/" rel="nofollow noreferrer">http://developer.yahoo.com/yslow/</a></p>
<p>Firebug, of course, also is essential. You can profile your JS explicitly or in real time by hitting the profile button. Making optimisations where necessary and seeing how long all your functions take to run. This changed the way I measure the performance of my JS code. <a href="http://getfirebug.com/js.html" rel="nofollow noreferrer">http://getfirebug.com/js.html</a></p>
| 6,326 |
<p>I want to allow users to upload avatar-type images in a variety of formats (<strong>GIF, JPEG, and PNG at least</strong>), but to save them all as <strong>PNG database BLOBs</strong>. If the images are oversized, pixelwise, I want to resize them before DB-insertion.</p>
<p><strong>What is the best way to use GD to do the resizing and PNG conversion?</strong></p>
<p>Edit: Sadly, only <a href="http://php.net/manual/en/book.image.php" rel="nofollow noreferrer">GD</a> is available on the server I need to use, no <a href="http://www.imagemagick.org/script/index.php" rel="nofollow noreferrer">ImageMagick</a>.</p>
| <pre><code><?php
/*
Resizes an image and converts it to PNG returning the PNG data as a string
*/
function imageToPng($srcFile, $maxSize = 100) {
list($width_orig, $height_orig, $type) = getimagesize($srcFile);
// Get the aspect ratio
$ratio_orig = $width_orig / $height_orig;
$width = $maxSize;
$height = $maxSize;
// resize to height (orig is portrait)
if ($ratio_orig < 1) {
$width = $height * $ratio_orig;
}
// resize to width (orig is landscape)
else {
$height = $width / $ratio_orig;
}
// Temporarily increase the memory limit to allow for larger images
ini_set('memory_limit', '32M');
switch ($type)
{
case IMAGETYPE_GIF:
$image = imagecreatefromgif($srcFile);
break;
case IMAGETYPE_JPEG:
$image = imagecreatefromjpeg($srcFile);
break;
case IMAGETYPE_PNG:
$image = imagecreatefrompng($srcFile);
break;
default:
throw new Exception('Unrecognized image type ' . $type);
}
// create a new blank image
$newImage = imagecreatetruecolor($width, $height);
// Copy the old image to the new image
imagecopyresampled($newImage, $image, 0, 0, 0, 0, $width, $height, $width_orig, $height_orig);
// Output to a temp file
    $destFile = tempnam(sys_get_temp_dir(), 'img');
imagepng($newImage, $destFile);
// Free memory
imagedestroy($newImage);
if ( is_file($destFile) ) {
$f = fopen($destFile, 'rb');
        $data = fread($f, filesize($destFile));
fclose($f);
// Remove the tempfile
unlink($destFile);
return $data;
}
throw new Exception('Image conversion failed.');
}
</code></pre>
| <p><a href="http://www.phpit.net/article/image-manipulation-php-gd-part2/" rel="nofollow noreferrer">This article</a> seems like it would fit what you want. You'll need to change the saving imagejpeg() function to imagepng() and have it save the file to a string rather than output it to the page, but other than that it should be easy copy/paste into your existing code.</p>
| 4,224 |
<p>Trying to honor a feature request from our customers, I'd like that my application, when Internet is available, check on our website if a new version is available.</p>
<p>The problem is that I have no idea about what have to be done on the server side.</p>
<p>I can imagine that my application (developped in C++ using Qt) has to send a request (HTTP ?) to the server, but what is going to respond to this request ? In order to go through firewalls, I guess I'll have to use port 80 ? Is this correct ?</p>
<p>Or, for such a feature, do I have to ask our network admin to open a specific port number through which I'll communicate ?</p>
<hr>
<p>@<a href="https://stackoverflow.com/questions/56391/automatically-checking-for-a-new-version-of-my-application/56418#56418">pilif</a> : thanks for your detailed answer. There is still something which is unclear for me :</p>
<blockquote>
<p>like</p>
<p><code>http://www.example.com/update?version=1.2.4</code></p>
<p>Then you can return what ever you want, probably also the download-URL of the installer of the new version.</p>
</blockquote>
<p>How do I return something ? Will it be a php or asp page (I know nothing about PHP nor ASP, I have to confess) ? How can I decode the <code>?version=1.2.4</code> part in order to return something accordingly ?</p>
| <p>I would absolutely recommend to just do a plain HTTP request to your website. Everything else is bound to fail.</p>
<p>I'd make a HTTP GET request to a certain page on your site containing the version of the local application.</p>
<p>like</p>
<pre><code>http://www.example.com/update?version=1.2.4
</code></pre>
<p>Then you can return what ever you want, probably also the download-URL of the installer of the new version. </p>
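<p>To make the server side concrete: the page behind that URL can be written in anything that can read a query string and write a response (PHP, ASP, or whatever else is available). Purely as an illustration (in Python, using only the standard library; the version number and download URL are made up), the handler just parses the <code>version</code> parameter and answers accordingly:</p>
<pre><code># Illustration only: a tiny update-check endpoint. Any server-side language works.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

LATEST = "1.3.0"                                                  # assumed latest version
DOWNLOAD_URL = "http://www.example.com/downloads/setup-1.3.0.exe" # made-up URL

class UpdateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        client_version = query.get("version", ["0"])[0]   # the ?version=1.2.4 part
        if client_version != LATEST:   # naive check; use proper version comparison in practice
            body = "UPDATE %s %s" % (LATEST, DOWNLOAD_URL)
        else:
            body = "UP-TO-DATE"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("", 80), UpdateHandler).serve_forever()   # port 80, per the firewall discussion
</code></pre>
<p>The client then only has to parse that one-line body to decide whether to offer the update.</p>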
<p>Why not just put a static file with the latest version to the server and let the client decide? Because you may want (or need) to have control over the process. Maybe 1.2 won't be compatible with the server in the future, so you want the server to force the update to 1.3, but the update from 1.2.4 to 1.2.6 could be uncritical, so you might want to present the client with an optional update.</p>
<p>Or you want to have a breakdown over the installed base.</p>
<p>Or whatever. Usually, I've learned it's best to keep as much intelligence on the server, because the server is what you have ultimate control over.</p>
<p>Speaking here with a bit of experience in the field, here's a small preview of what can (and will - trust me) go wrong:</p>
<ul>
<li>Your Application will be prevented from making HTTP-Requests by the various Personal Firewall applications out there.</li>
<li>A considerable percentage of users won't have the needed permissions to actually get the update process going.</li>
<li>Even if your users have allowed the old version past their personal firewall, said tool will complain because the .EXE has changed and will recommend the user not to allow the new exe to connect (users usually comply with the wishes of their security tool here).</li>
<li>In managed environments, you'll be shot and hanged (not necessarily in that order) for loading executable content from the web and then actually executing it.</li>
</ul>
<p>So to keep the damage as low as possible, </p>
<ul>
<li>fail silently when you can't connect to the update server</li>
<li>before updating, make sure that you have write-permission to the install directory and warn the user if you do not, or just don't update at all.</li>
<li>Provide a way for administrators to turn the auto-update off.</li>
</ul>
<p>It's no fun to do what you are about to do - especially when you deal with non technically inclined users as I had to numerous times.</p>
| <p>The simplest way to make this happen is to fire an HTTP request using a library like <a href="http://curl.haxx.se/libcurl/" rel="nofollow noreferrer">libcurl</a> and make it download an ini or xml file which contains the online version and where a new version would be available online.</p>
<p>After parsing the xml file you can determine if a new version is needed and download the new version with libcurl and install it.</p>
| 8,031 |
<p>My colleagues are attempting to connect BizTalk 2006 R2 via DB2/MVS adapter to a database hosted on z/OS mainframe. When testing the connecting settings, they are getting the following error</p>
<pre><code>Could not connect to data source 'New Data Source':
The network connection was terminated because the host failed to send any data.
SQLSTATE: 08S01, SQLCODE: -605
</code></pre>
<p>When putting the settings in a regular connection string and opening with .NET code, that is fine. I am new to BizTalk and DB2. Can anybody suggest what to look out for when this error surfaces?</p>
<p><strong>24 Aug 08:</strong></p>
<p>Well, if normal .NET code with a regular DB2 connection string is used, the connection can be made and queries submitted. What this DB2 adapter is reporting is it cannot even make a proper connection handshake, let alone submitting queries. I am unsure of what are the actual mechanisms involved to make a DB2 connection happen.</p>
<p><strong>25 Aug 08:</strong></p>
<blockquote>
<p>According to <a href="http://forums.microsoft.com/msdn/showpost.aspx?postid=1155829&siteid=1&sb=0&d=1&at=7&ft=11&tf=0&pageid=0" rel="nofollow noreferrer">this MSDN forums posting</a>, it seems to be a login issue.</p>
</blockquote>
<p>I have seen that and that is not the case here. If we put the user name as the Package Collection it still hits the same problem.</p>
<p><strong>26 Aug 08:</strong></p>
<p>Because of the scarcity of information regarding connecting to mainframe DB2 databases from Microsoft products, I undertook the task of inspecting raw network packets to get a clue what is going on between the .NET DB2 provider's connection (which works) and the BizTalk 2006 DB2 adapter (which bombs). I observed DB2 traffic is done using the DRDA protocol. And ultimately concluded the BizTalk adapter method fails because of what's recorded in the server's reply SECCHKRM packet</p>
<pre><code>DRDA (Security Check)
DDM (SECCHKRM)
Length: 55
Magic: 0xd0
Format: 0x02
0... = Reserved: Not set
.0.. = Chained: Not set
..0. = Continue: Not set
...0 = Same correlation: Not set
DSS type: RPYDSS (2)
CorrelId: 0
Length2: 49
Code point: SECCHKRM (0x1219)
Parameter (Severity Code)
Length: 6
Code point: SVRCOD (0x1149)
Data (ASCII):
Data (EBCDIC):
Parameter (Security Check Code)
Length: 5
Code point: SECCHKCD (0x11a4)
Data (ASCII):
Data (EBCDIC):
Parameter (Server Diagnostic Information)
Length: 34
Code point: SRVDGN (0x1153)
Data (ASCII): \304\331\304\301@\301\331z@\301\344\343\310\305\325\343\311\303\301\343\311\326\325@\206\201\211\223\205\204
Data (EBCDIC): DRDA AR: AUTHENTICATION failed
</code></pre>
<p>Why the same credentials fails here while succeeding in the .NET provider is beyond me. Right now, what I can observe is a marked difference between each method when it comes to the sequence of packets transferred.</p>
<p>.NET DB2 provider</p>
<pre><code>No. Time Source Destination Protocol Info
1 0.000000 [client IP] [DB2 server IP] TCP kpop > 50000 [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=1
2 0.000399 [DB2 server IP] [client IP] TCP 50000 > kpop [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 WS=0
3 0.000414 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=1 Ack=1 Win=65536 [TCP CHECKSUM INCORRECT] Len=0
4 0.000532 [client IP] [DB2 server IP] DRDA EXCSAT | ACCSEC
5 0.038162 [DB2 server IP] [client IP] DRDA EXCSATRD | ACCSECRD
6 0.041829 [client IP] [DB2 server IP] DRDA ACCSEC | SECCHK | ACCRDB
7 0.083626 [DB2 server IP] [client IP] TCP 50000 > kpop [ACK] Seq=108 Ack=542 Win=65535 Len=0
8 0.190534 [DB2 server IP] [client IP] DRDA ACCSECRD | SECCHKRM | ACCRDBRM | SQLCARD
9 0.199776 [client IP] [DB2 server IP] DRDA PRPSQLSTT | SQLATTR | SQLSTT | OPNQRY
10 0.293307 [DB2 server IP] [client IP] TCP [TCP segment of a reassembled PDU]
11 0.293359 [DB2 server IP] [client IP] TCP [TCP segment of a reassembled PDU]
12 0.293377 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=870 Ack=1444 Win=64092 [TCP CHECKSUM INCORRECT] Len=0
13 0.293404 [DB2 server IP] [client IP] TCP [TCP segment of a reassembled PDU]
14 0.293452 [DB2 server IP] [client IP] TCP [TCP segment of a reassembled PDU]
15 0.293461 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=870 Ack=2516 Win=65536 [TCP CHECKSUM INCORRECT] Len=0
16 0.293855 [DB2 server IP] [client IP] TCP [TCP segment of a reassembled PDU]
17 0.293908 [DB2 server IP] [client IP] DRDA SQLDARD
18 0.293918 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=870 Ack=3588 Win=64464 [TCP CHECKSUM INCORRECT] Len=0
19 0.293957 [DB2 server IP] [client IP] DRDA QRYDSC
20 0.294008 [DB2 server IP] [client IP] DRDA QRYDTA
21 0.294017 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=870 Ack=4660 Win=65536 [TCP CHECKSUM INCORRECT] Len=0
22 0.294023 [DB2 server IP] [client IP] DRDA SQLCARD
23 0.295346 [client IP] [DB2 server IP] DRDA RDBCMM
24 0.297868 [DB2 server IP] [client IP] DRDA ENDUOWRM | SQLCARD
25 0.421392 [client IP] [DB2 server IP] DRDA PRPSQLSTT | SQLATTR | SQLSTT | OPNQRY
26 0.456504 [DB2 server IP] [client IP] DRDA SQLDARD | OPNQRYRM | TYPDEFNAM | QRYDSC | QRYDTA | ENDQRYRM | TYPDEFNAM | SQLCARD
27 0.456756 [client IP] [DB2 server IP] DRDA RDBCMM
28 0.488311 [DB2 server IP] [client IP] DRDA ENDUOWRM | SQLCARD
29 0.498806 [client IP] [DB2 server IP] DRDA PRPSQLSTT | SQLATTR | SQLSTT | OPNQRY
30 0.630477 [DB2 server IP] [client IP] TCP 50000 > kpop [ACK] Seq=5157 Ack=1579 Win=65171 Len=0
31 0.788165 [DB2 server IP] [client IP] DRDA SQLDARD | OPNQRYRM | TYPDEFNAM | QRYDSC | QRYDTA
32 0.788203 [DB2 server IP] [client IP] DRDA ENDQRYRM
33 0.788225 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=1579 Ack=5815 Win=64380 [TCP CHECKSUM INCORRECT] Len=0
34 0.788648 [client IP] [DB2 server IP] DRDA RDBCMM
35 0.795951 [DB2 server IP] [client IP] DRDA ENDUOWRM | SQLCARD
36 0.807365 [client IP] [DB2 server IP] DRDA PRPSQLSTT | SQLATTR | SQLSTT | OPNQRY
37 0.838046 [DB2 server IP] [client IP] DRDA SQLDARD | OPNQRYRM | TYPDEFNAM | QRYDSC | QRYDTA | ENDQRYRM | TYPDEFNAM | SQLCARD
38 0.838328 [client IP] [DB2 server IP] DRDA RDBCMM
39 0.841866 [DB2 server IP] [client IP] DRDA ENDUOWRM | SQLCARD
40 0.973506 [client IP] [DB2 server IP] TCP kpop > 50000 [ACK] Seq=1906 Ack=6304 Win=65482 [TCP CHECKSUM INCORRECT] Len=0
</code></pre>
<p>BizTalk DB2 adapter</p>
<pre><code>No. Time Source Destination Protocol Info
1 0.000000 [client IP] [DB2 server IP] TCP 28165 > 50000 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=8
2 0.002587 [DB2 server IP] [client IP] TCP 50000 > 28165 [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 WS=0
3 0.010146 [client IP] [DB2 server IP] TCP 28165 > 50000 [ACK] Seq=1 Ack=1 Win=65536 Len=0
4 0.019698 [client IP] [DB2 server IP] DRDA EXCSAT
5 0.020849 [DB2 server IP] [client IP] DRDA EXCSATRD
6 0.034699 [client IP] [DB2 server IP] DRDA ACCSEC
7 0.036584 [DB2 server IP] [client IP] DRDA ACCSECRD
8 0.042031 [client IP] [DB2 server IP] DRDA SECCHK
9 0.046350 [DB2 server IP] [client IP] DRDA SECCHKRM
10 0.046642 [DB2 server IP] [client IP] TCP 50000 > 28165 [FIN, ACK] Seq=160 Ack=200 Win=65336 Len=0
11 0.053787 [client IP] [DB2 server IP] TCP 28165 > 50000 [ACK] Seq=200 Ack=161 Win=65536 Len=0
12 0.056891 [client IP] [DB2 server IP] DRDA ACCRDB
13 0.058084 [DB2 server IP] [client IP] TCP 50000 > 28165 [RST, ACK] Seq=161 Ack=295 Win=0 Len=0
</code></pre>
<p>It is interesting to witness the .NET provider issue various DRDA protocol packets within a single TCP segment. The BizTalk adapter, on the other hand, places only one protocol packet per TCP segment. I do not know why this is so. However, at the moment I think that is a red herring and the true difference causing the failure in authentication is in the DRDA data exchange. I do not know the DRDA protocol, so I will have to study it before I can make more sense of it.</p>
<p><strong>18 Sep 08:</strong></p>
<p>At this stage the problem is still not solved, as getting cooperation from the DB2 DBA team and help from Microsoft have been met with many obstacles.</p>
<p>What I do want to report is, I have observed perhaps one crucial difference between all the cases of successful connection versus the failed attempt:</p>
<p>The BizTalk DB2 adapter is underlyingly using <strong>Microsoft ODBC Driver for DB2</strong>. The other software tests that succeed make use of <strong>IBM DB2 ODBC DRIVER</strong> or <strong>IBM DB2 ODBC DRIVER – IBMCL1</strong>. The IBM driver's parameter configuration is different from Microsoft's driver. But we do not see any obviously critical difference that may lead to a failed authentication for the Microsoft driver.</p>
| <p>Why, it certainly took Microsoft long enough to explicitly confirm this:</p>
<p><strong>proxy connections via DB2Connect is not supported by BizTalk DB2 Adapter</strong></p>
<p>Since our customer's policy is to only access DB2 databases via DB2Connect, the adapter is out of the question.</p>
<p><strong>MORE BACKGROUND INFO</strong></p>
<p>The reason why the DB2 Adapter only works for a direct connection to a z/OS mainframe host is due to legal restrictions. Technically it is possible to work a connection with DB2Connect, but IBM has made it a proprietary node and prevented other parties from legally establishing the correct DRDA sequence to connect to it.</p>
| <p>I've never used this adapter myself, so I'm guessing, but maybe it's to do with the account that BizTalk is using to connect, or your ports are not configured correctly.</p>
| 4,455 |
<p>These days, i came across a problem with Team System Unit Testing. I found that the automatically created accessor class ignores generic constraints - at least in the following case:</p>
<p>Assume you have the following class:</p>
<pre><code>namespace MyLibrary
{
public class MyClass
{
public Nullable<T> MyMethod<T>(string s) where T : struct
{
return (T)Enum.Parse(typeof(T), s, true);
}
}
}
</code></pre>
<p>If you want to test MyMethod, you can create a test project with the following test method:</p>
<pre><code>public enum TestEnum { Item1, Item2, Item3 }
[TestMethod()]
public void MyMethodTest()
{
MyClass c = new MyClass();
PrivateObject po = new PrivateObject(c);
MyClass_Accessor target = new MyClass_Accessor(po);
// The following line produces the following error:
// Unit Test Adapter threw exception: GenericArguments[0], 'T', on
// 'System.Nullable`1[T]' violates the constraint of type parameter 'T'..
TestEnum? e1 = target.MyMethod<TestEnum>("item2");
// The following line works great but does not work for testing private methods.
TestEnum? e2 = c.MyMethod<TestEnum>("item2");
}
</code></pre>
<p>Running the test will fail with the error mentioned in the comment of the snippet above. The problem is the accessor class created by Visual Studio. If you go into it, you will come up to the following code:</p>
<pre><code>namespace MyLibrary
{
[Shadowing("MyLibrary.MyClass")]
public class MyClass_Accessor : BaseShadow
{
protected static PrivateType m_privateType;
[Shadowing(".ctor@0")]
public MyClass_Accessor();
public MyClass_Accessor(PrivateObject __p1);
public static PrivateType ShadowedType { get; }
public static MyClass_Accessor AttachShadow(object __p1);
[Shadowing("MyMethod@1")]
public T? MyMethod(string s);
}
}
</code></pre>
<p>As you can see, there is no constraint for the generic type parameter of the MyMethod method.</p>
<p>Is that a bug? Is that by design? Who knows how to work around that problem?</p>
| <p>I vote bug. I don't see how this could be by design.</p>
| <p>Search for unit tests with generics on MSDN. This is a known limitation. Vote for a resolution on Microsoft Connect, as it definitely needs resolving.</p>
| 9,042 |
<p>I'm building a RESTful web service which has multiple URIs for one of its resources, because there is more than one unique identifier. Should the server respond to a GET request for an alternate URI by returning the resource, or should I send an HTTP 3xx redirect to the canonical URI? Is <em>HTTP 303 (see also)</em> the most appropriate redirect?</p>
<p>Clarification: the HTTP specification makes it clear that the choice of redirect depends on which URI future requests should use. In my application, the 'canonical' URI is the most stable of the alternatives; an alternative URI will always direct to same canonical URI, or become invalid.</p>
| <p>I'd personally plump for returning the resource rather than faffing with a redirect, although I suspect that's only because my subconscious is telling me redirects are slower.</p>
<p>However, if you were to decide to use a redirect, I'd think a 302 or 307 might be more appropriate than a 303, although the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html" rel="nofollow noreferrer">w3.org</a> page has details of the different redirect codes you could use.</p>
| <p>The answer from Ubiguchi had what I needed, except that I now think a redirect is the way to go, via the link to the HTTP 1.1 specification section on response codes. It turns out that I actually need a 301 redirect because the URI I'm redirecting to is more 'correct' and stable, and should therefore be used for future requests.</p>
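<p>For completeness, issuing such a redirect is just a status code plus a <code>Location</code> header pointing at the canonical URI. A minimal sketch (in Python's standard <code>http.server</code>, purely for illustration; the URI mapping is made up):</p>
<pre><code># Minimal sketch: answer an alternate URI with a 301 to the canonical URI.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALIASES = {"/widgets/by-serial/XYZ123": "/widgets/42"}   # alternate -> canonical (hypothetical)

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        canonical = ALIASES.get(self.path)
        if canonical is not None:
            self.send_response(301)                   # moved permanently
            self.send_header("Location", canonical)   # where future requests should go
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
</code></pre>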
| 5,169 |
<p>Suppose I have a stringbuilder in C# that does this:</p>
<pre><code>StringBuilder sb = new StringBuilder();
string cat = "cat";
sb.Append("the ").Append(cat).(" in the hat");
string s = sb.ToString();
</code></pre>
<p>would that be as efficient or any more efficient as having:</p>
<pre><code>string cat = "cat";
string s = String.Format("The {0} in the hat", cat);
</code></pre>
<p>If so, why?</p>
<p><strong>EDIT</strong> </p>
<p>After some interesting answers, I realised I probably should have been a little clearer in what I was asking. I wasn't so much asking for which was quicker at concatenating a string, but which is quicker at <strong><em>injecting</em></strong> one string into another.</p>
<p>In both cases above I want to inject one or more strings into the middle of a predefined template string.</p>
<p>Sorry for the confusion</p>
| <p><strong>NOTE:</strong> This answer was written when .NET 2.0 was the current version. This may no longer apply to later versions.</p>
<p><code>String.Format</code> uses a <code>StringBuilder</code> internally:</p>
<pre><code>public static string Format(IFormatProvider provider, string format, params object[] args)
{
if ((format == null) || (args == null))
{
throw new ArgumentNullException((format == null) ? "format" : "args");
}
StringBuilder builder = new StringBuilder(format.Length + (args.Length * 8));
builder.AppendFormat(provider, format, args);
return builder.ToString();
}
</code></pre>
<p>The above code is a snippet from mscorlib, so the question becomes "is <code>StringBuilder.Append()</code> faster than <code>StringBuilder.AppendFormat()</code>"? </p>
<p>Without benchmarking I'd probably say that the code sample above would run more quickly using <code>.Append()</code>. But it's a guess, try benchmarking and/or profiling the two to get a proper comparison.</p>
<p>This chap, Jerry Dixon, did some benchmarking:</p>
<blockquote>
<p><a href="http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm" rel="noreferrer">http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm</a></p>
</blockquote>
<p><strong>Updated:</strong></p>
<p>Sadly the link above has since died. However there's still a copy on the Way Back Machine:</p>
<blockquote>
<p><a href="http://web.archive.org/web/20090417100252/http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm" rel="noreferrer">http://web.archive.org/web/20090417100252/http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm</a></p>
</blockquote>
<p>At the end of the day it depends whether your string formatting is going to be called repetitively, i.e. you're doing some serious text processing over 100's of megabytes of text, or whether it's being called when a user clicks a button now and again. Unless you're doing some huge batch processing job I'd stick with String.Format, it aids code readability. If you suspect a perf bottleneck then stick a profiler on your code and see where it really is.</p>
| <p>I would suggest not, since String.Format was not designed for concatenation; it was designed for formatting the output of various inputs, such as a date.</p>
<pre><code>String s = String.Format("Today is {0:dd-MMM-yyyy}.", DateTime.Today);
</code></pre>
| 2,849 |
<p>I know that 3D printed parts can be coated in metal by painting them with conductive paint (graphite or copper seems to be usual) and then electroplating them in a commercial copper or nickel bath. The disadvantage of this process is that it does not coat insides very well, because those are not reached by the electric field.</p>
<p>I know that in the industry for plating ABS-parts with chrome and other metals, there is a process used where first the ABS is etched, then seeded with electroless catalytic palladium and then there are various options, for example electroless nickel or chrome.</p>
<p>I tried to etch both FDM printed ABS and ABS-like resin prints in NaOH, then after rinsing, dropped them in a commercial palladium activator and, after rinsing again, then in an electroless nickel bath, without any effect.</p>
<p>Does anyone have an idea on how to metal plate 3D printed parts (by electroless plating, meaning no electricity involved) and can shed some light on the chemicals used? I would prefer to mix them myself.</p>
| <p>Good day. I have experience applying electroless nickel to difficult 3D parts; in your case, it's exceedingly difficult to do it directly on the printed piece. You will need to seal the piece with a more benign coating that will accept the palladium activator. Also, a sensitizer is needed before the activator.</p>
<p>Try this.</p>
<p>Try your original procedure, but this time, just before the palladium activator, use a stannous chloride solution (20 g per 1 L of deionized water and 10 ml of muriatic acid): submerge the piece in the solution for 1 minute, rinse in deionized water, place the piece in the palladium activator for 1 minute (don't rinse off the activator), and then place the piece directly in the electroless nickel plating bath.</p>
<p>If that still does not produce good results, then you will need to seal the piece with a two-part epoxy sealer and do the process again.</p>
| <p>You can use semiconductor processes like evaporation, sputtering, and chemical vapor deposition. These are expensive. Sputtering is probably the most feasible, but it will have trouble with shadows.</p>
| 2,192 |
<p>What's the <strong>easiest</strong>, <strong>tersest</strong>, and most <strong>flexible</strong> method or library for parsing Python command line arguments?</p>
| <p><strong>This answer suggests <code>optparse</code> which is appropriate for older Python versions. For Python 2.7 and above, <code>argparse</code> replaces <code>optparse</code>. See <a href="https://stackoverflow.com/questions/3217673/why-use-argparse-rather-than-optparse">this answer</a> for more information.</strong></p>
<p>As other people pointed out, you are better off going with optparse over getopt. getopt is pretty much a one-to-one mapping of the standard getopt(3) C library functions, and not very easy to use.</p>
<p>optparse, while being a bit more verbose, is much better structured and simpler to extend later on.</p>
<p>Here's a typical line to add an option to your parser:</p>
<pre><code>parser.add_option('-q', '--query',
action="store", dest="query",
help="query string", default="spam")
</code></pre>
<p>It pretty much speaks for itself; at processing time, it will accept -q or --query as options, store the argument in an attribute called query and has a default value if you don't specify it. It is also self-documenting in that you declare the help argument (which will be used when run with -h/--help) right there with the option.</p>
<p>Usually you parse your arguments with:</p>
<pre><code>options, args = parser.parse_args()
</code></pre>
<p>This will, by default, parse the standard arguments passed to the script (sys.argv[1:])</p>
<p>options.query will then be set to the value you passed to the script.</p>
<p>You create a parser simply by doing</p>
<pre><code>parser = optparse.OptionParser()
</code></pre>
<p>These are all the basics you need. Here's a complete Python script that shows this:</p>
<pre><code>import optparse
parser = optparse.OptionParser()
parser.add_option('-q', '--query',
action="store", dest="query",
help="query string", default="spam")
options, args = parser.parse_args()
print 'Query string:', options.query
</code></pre>
<p>5 lines of python that show you the basics.</p>
<p>Save it in sample.py, and run it once with</p>
<pre><code>python sample.py
</code></pre>
<p>and once with</p>
<pre><code>python sample.py --query myquery
</code></pre>
<p>Beyond that, you will find that optparse is very easy to extend.
In one of my projects, I created a Command class which allows you to nest subcommands in a command tree easily. It uses optparse heavily to chain commands together. It's not something I can easily explain in a few lines, but feel free to <a href="https://thomas.apestaart.org/moap/trac/browser/trunk/moap/extern/command/command.py" rel="noreferrer">browse around in my repository</a> for the main class, as well as <a href="https://thomas.apestaart.org/moap/trac/browser/trunk/moap/command/doap.py" rel="noreferrer">a class that uses it and the option parser</a></p>
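<p>optparse itself has no notion of subcommands, which is why that Command class exists. For comparison only (this is not the code from that repository), the later argparse module supports the same nesting idea directly through sub-parsers:</p>
<pre><code># Rough illustration of nested commands using argparse sub-parsers.
import argparse

parser = argparse.ArgumentParser(prog="tool")
subparsers = parser.add_subparsers(dest="command")

query = subparsers.add_parser("query", help="run a query")
query.add_argument("-q", "--query", default="spam", help="query string")

show = subparsers.add_parser("show", help="show an item")
show.add_argument("item_id", type=int, help="id of the item to show")

args = parser.parse_args()
print(args)
</code></pre>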
| <p>I extended Erco's approach to allow for required positional arguments and for optional arguments. These should precede the -d, -v etc. arguments.</p>
<p>Positional and optional arguments can be retrieved with PosArg(i) and OptArg(i, default) respectively.
When an optional argument is found the start position of searching for options (e.g. -i) is moved 1 ahead to avoid causing an 'unexpected' fatal.</p>
<pre><code>import os,sys
def HelpAndExit():
print("<<your help output goes here>>")
sys.exit(1)
def Fatal(msg):
sys.stderr.write("%s: %s\n" % (os.path.basename(sys.argv[0]), msg))
sys.exit(1)
def NextArg(i):
'''Return the next command line argument (if there is one)'''
if ((i+1) >= len(sys.argv)):
Fatal("'%s' expected an argument" % sys.argv[i])
return(1, sys.argv[i+1])
def PosArg(i):
'''Return positional argument'''
if i >= len(sys.argv):
Fatal("'%s' expected an argument" % sys.argv[i])
return sys.argv[i]
def OptArg(i, default):
'''Return optional argument (if there is one)'''
if i >= len(sys.argv):
Fatal("'%s' expected an argument" % sys.argv[i])
if sys.argv[i][:1] != '-':
return True, sys.argv[i]
else:
return False, default
### MAIN
if __name__=='__main__':
verbose = 0
debug = 0
infile = "infile"
outfile = "outfile"
options_start = 3
# --- Parse two positional parameters ---
n1 = int(PosArg(1))
n2 = int(PosArg(2))
# --- Parse an optional parameters ---
present, a3 = OptArg(3,50)
n3 = int(a3)
options_start += int(present)
# --- Parse rest of command line ---
skip = 0
for i in range(options_start, len(sys.argv)):
if not skip:
if sys.argv[i][:2] == "-d": debug ^= 1
elif sys.argv[i][:2] == "-v": verbose ^= 1
elif sys.argv[i][:2] == "-i": (skip,infile) = NextArg(i)
elif sys.argv[i][:2] == "-o": (skip,outfile) = NextArg(i)
elif sys.argv[i][:2] == "-h": HelpAndExit()
elif sys.argv[i][:1] == "-": Fatal("'%s' unknown argument" % sys.argv[i])
else: Fatal("'%s' unexpected" % sys.argv[i])
else: skip = 0
print("Number 1 = %d" % n1)
print("Number 2 = %d" % n2)
print("Number 3 = %d" % n3)
print("Debug = %d" % debug)
print("verbose = %d" % verbose)
print("infile = %s" % infile)
print("outfile = %s" % outfile)
</code></pre>
| 4,042 |
<p>Does anyone know a good way to do this? I need to have simple forms that submit to email without writing a lot of code. These forms will be hosted in content-viewer web parts or similar in MOSS 2007. I'd like to avoid using InfoPath.</p>
| <p>You could use a list which would give you the input form.</p>
<p>It depends on a) whether people should be able to see each other's submissions and b) who the e-mail should go to.</p>
<p>You could set an alert (Actions -> Alert Me) to send an e-mail to a person/people when a new item is added to the list.</p>
<p>In Settings -> List Settings -> Advanced Settings, there's the options for which items a user can see/edit. Alerts however cannot be set on lists where users can only see their own items. In this case, I would use a simple workflow to send the e-mail. I've only worked with MOSS 2007 and SharePoint Designer though - I'm not sure about WSS.</p>
| <p>With the sharepoint sdk, you can create your own webparts. If you add them to the GAC you can include them on your sharepoint site. You'd of course have to build a webpart for emailing though.</p>
| 8,917 |
<p>Printer: Ender 3 Pro - Direct drive, BLTouch, stock magnetic print bed surface<br />
Material: PLA - multiple brands</p>
<p>Slicer Settings:</p>
<ul>
<li>Layer Height .2 mm</li>
<li>Initial Layer Height .1 mm</li>
<li>Line Width .4 mm (with .4 mm nozzle) and also tried .39 mm</li>
<li>Wall thickness 1.2 mm (3 lines)</li>
<li>Hot End 210 °C</li>
<li>Bed 60 °C</li>
<li>Print Speed tried between 40 and 100 mm/s</li>
<li>Retraction Distance 6 mm speed 25 mm/s</li>
<li>Print Cooling- Initial 3 layers 0 %, then 100 %, also tried 50 %</li>
</ul>
<p>I designed this in Tinkercad and sliced it with Cura for the Ender 3 Pro. When I print this specific shape at this size the outer wall does not print properly, there are large gaps and it does not bond to the rest of the model so it just flakes off as soon as you handle it. I can print other shapes fine- see the color swap example that was printed with the same settings successfully before AND after this series of failed prints of the same design. Sometimes I get a clean first 15 layers then it goes to crap, and cleans up for the last 10% or so. If I scale it down to 50 % it prints fine, but it will NOT print properly at full scale.</p>
<p>The infill and inner walls seem fine, but the walls are definitely not air-tight with what is left.</p>
<p>Looking for more things to try and troubleshoot, so please send me your ideas!</p>
<p>Troubleshooting steps so far:</p>
<ul>
<li>Re-downloaded the files and resliced with all new settings</li>
<li>Adjusted the print speed, wall line width and fan speeds</li>
<li>Tried multiple PLA types and brands</li>
</ul>
<p><a href="https://i.stack.imgur.com/4D9iC.jpg" rel="nofollow noreferrer" title="Printed model with printing errors"><img src="https://i.stack.imgur.com/4D9iC.jpg" alt="Printed model with printing errors" title="Printed model with printing errors" /></a></p>
<p><a href="https://i.stack.imgur.com/BNu8J.jpg" rel="nofollow noreferrer" title="Printed model without printing errors"><img src="https://i.stack.imgur.com/BNu8J.jpg" alt="Printed model without printing errors" title="Printed model without printing errors" /></a></p>
| <p>Assuming you do have a direct drive system as described.
Is this correct?</p>
<blockquote>
<p>Retraction Distance 6 mm speed 25 mm/s</p>
</blockquote>
<p>A retraction distance of 6 mm for a direct drive is huge and could easily be pulling the filament back so far that it becomes problematic. Most of my direct drive printers use 0.4 to 1 mm. In addition, the 25 mm/s retraction speed appears to me to be on the low side, though it would not surprise me if this is highly extruder specific. Most of my retraction speeds are 60 to 90 mm/s.
Faster retraction speeds can actually be more effective at reducing stringing than adding distance. Faster travel will also reduce stringing, as the nozzle has less time to ooze.</p>
<p>Are you incorrectly using a Bowden profile?</p>
| <p>You appear to have severe underextrusion. It may be that you are trying to melt plastic faster than your printer can physically manage, which would explain why the print starts well, then the nozzle gets too cold, and then comes good again as the layers get small.
That doesn't explain why reducing the print speed doesn't help. Double check that it is actually slower.
I have had similar issues with cura that were fixed by altering the configuration, like tilting the model slightly or changing print orientation.</p>
| 2,000 |
<p>Is there any way in IIS to map requests to a particular URL with no extension to a given application.</p>
<p>For example, in trying to port something from a Java servlet, you might have a URL like this...</p>
<p><a href="http://[server]/MyApp/HomePage?some=parameter" rel="nofollow noreferrer">http://[server]/MyApp/HomePage?some=parameter</a></p>
<p>Ideally I'd like to be able to map everything under MyApp to a particular application, but failing that, any suggestions about how to achieve the same effect would be really helpful.</p>
| <p>With AIR on Linux, it is easy to write to stdout, since the process can see its own file descriptors as files in /dev.</p>
<p>For stdout, open <code>/dev/fd/1</code> or <code>/dev/stdout</code> as a <code>FileStream</code>, then write to that.</p>
<p>Example:</p>
<pre><code>var stdout : FileStream = new FileStream();
stdout.open(new File("/dev/fd/1"), FileMode.WRITE);
stdout.writeUTFBytes("test\n");
stdout.close();
</code></pre>
<p><strong>Note:</strong> See <a href="https://stackoverflow.com/questions/5552277/when-to-use-writeutf-and-writeutfbytes-in-bytearray-of-as3">this answer</a> for the difference between <code>writeUTF()</code> and <code>writeUTFBytes()</code> - the latter will avoid garbled output on stdout.</p>
| <p>If you are using a debug Flash Player, you can have the Flash Player log trace messages to a file on your system.</p>
<p>If you want real time messages, then you could tail the file.</p>
<p>More info:</p>
<p><a href="http://blog.flexexamples.com/2007/08/26/debugging-flex-applications-with-mmcfg-and-flashlogtxt/" rel="nofollow noreferrer">http://blog.flexexamples.com/2007/08/26/debugging-flex-applications-with-mmcfg-and-flashlogtxt/</a></p>
<p>mike chambers</p>
<p>[email protected]</p>
| 5,959 |
<p>My problem is that I have used a 3D printing machine from the University and found out that the cover for the car was not smooth even after using sanding paper and painting it.</p>
<p>What material would work best to print the cover of the Cyber truck. I want it to be light and smooth.</p>
<p>I have to print it from any online companies that have this service here in Germany.</p>
| <p>I have 3D printed models which were then sanded using progressively finer grades of sandpaper, terminating with wet sanding using micromesh to 12000 grit. The result was smooth and shining without any coating applied.</p>
<p>If your original results were not acceptable, the process may have been flawed and should be re-considered for technique.</p>
<p>For your purposes, as a body for a radio controlled vehicle, you'll want to consider something that can manage an impact reasonably well. ABS is going to be less expensive and provide some energy absorption but will have layer lines that require sanding and finishing. Layer thickness plays a substantial part in providing for good results and a smooth finish. I used 0.100 mm layers to get optimum smoothness.</p>
<p>You could request your model to be created in nylon using the SLS method, but the surface will be granular and would also require sanding to accomplish a smooth finish.</p>
<p>SLA or MSLA resin printed models will provide a very smooth surface, but the material is brittle and may crack during "on-road" use. You may find a printing service which offers to create using a more flexible resin, but you'd have to request that or confirm the selection when placing the order.</p>
| <h2>Choice of Material</h2>
<p>PLA is an obvious choice, but it has drawbacks compared with ABS.</p>
<ol>
<li>PLA is more brittle than ABS.</li>
<li>PLA softens at a lower temperature than ABS. </li>
<li>PLA is not treatable with acetone for vapor smoothing.</li>
<li>PLA can not be glued
with (most) solvent-based adhesives.</li>
</ol>
<p>I would consider ABS or ASA to be good choices for an RC-car body.</p>
<h2>Getting Smooth Surfaces</h2>
<p>To get a really smooth surface, after printing with thin layers and good print settings to minimize strings and blobs, you will want to treat the surface. The two most common techniques are sanding and vapor smoothing.</p>
<h2>Wet Sanding is Great. Dry Sanding is Not.</h2>
<p>Sanding is a great smoothing technique for PLA and even for ABS, but one must be careful. In addition to some of the sand paper grains being aligned as proper cutters to remove material, many grains are uselessly aligned and simply contribute to friction. The plastics used in FFF printing are, by definition, thermoplastics and will melt. One can easily soften and even melt plastic with dry sanding.</p>
<p>When the plastic softens, it can form little balls which dig into the surface, or stick to the surface. These hurt the surface finish like snowmen mar a field of freshly fallen snow.</p>
<p>Any sanding should be done wet, with wet-rated paper. Work up through the grits. Lower grit abrasive removes more material so that the surface can be "even". Lower grit lets you sand out the layer lines quickly.</p>
<h2>Use a Sequence of Grits</h2>
<p>Higher grit papers remove the scratch marks of the previous grit. </p>
<p>Generally, I go up by about 50% of grit number at each change.</p>
<p>When I am removing a lot of plastic and want to end with a glossy surface, I go through this sequence: 36->80->120->180->220->330->400->500->1000->1500->2500->3600->5000. Yes, I know that there are some big jumps there, but I haven't happened to stock papers at intermediate points, and it works for me. Depending on how smooth the surface already is, or how much material I must remove, I start further up the progressing. For 3D printed objects, I start at 220. The last three are critical. </p>
<h2>Consider Adding Vapor Finishing to your Toolbox</h2>
<p>Vapor finishing of ABS can give some very fine and glossy results. It is worth trying, although be aware that acetone mist is very flammable. If you are heating the acetone, treat it as life-threatening and possibly explosive.</p>
<p>If you use vapor finishing, consider that the acetone goes into the ABS. A warm soak or vacuum degassing could help remove it. You don't want the hood of your RC car to start bubbling in the sun as the acetone is motivated to escape.</p>
| 1,596 |
<p>I have been tasked with going through a number of ColdFusion sites that have recently been the subject of a rather nasty SQL Injection attack. Basically my work involves adding <code><cfqueryparam</code>> tags to all of the inline sql. For the most part I've got it down, but can anybody tell me how to use cfqueryparam with the LIKE operator?</p>
<p>If my query looks like this:</p>
<pre><code>select * from Foo where name like '%Bob%'
</code></pre>
<p>what should my <code><cfqueryparam</code>> tag look like?</p>
| <p>@Joel, I have to disagree.</p>
<pre><code>select a,b,c
from Foo
where name like <cfqueryparam cfsqltype="columnType" value="%#variables.someName#%" />
</code></pre>
<ol>
<li><p>Never suggest to someone that they should "select star." Bad form! Even for an example! (Even copied from the question!)</p></li>
<li><p>The query is pre-compiled and you should include the wild card character(s) as part of the parameter being passed to the query. This format is more readable and will run more efficiently.</p></li>
<li><p>When doing string concatenation, use the ampersand operator (&), not the plus sign. Technically, in most cases, plus will work just fine... until you throw a NumberFormat() in the middle of the string and start wondering why you're being told that you're not passing a valid number when you've checked and you are.</p></li>
</ol>
| <pre><code>select a,b,c
from Foo
where name like <cfqueryparam cfsqltype="cf_sql_varchar" value="%Bob%" />;
</code></pre>
| 6,872 |
<p>I need to enumerate all the user defined types created in a <code>SQL Server</code> database with <code>CREATE TYPE</code>, and/or find out whether they have already been defined.</p>
<p>With tables or stored procedures I'd do something like this:</p>
<pre><code>if exists (select * from dbo.sysobjects where name='foobar' and xtype='U')
drop table foobar
</code></pre>
<p>However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in <code>sysobjects</code>. </p>
<p>Can anyone enlighten me?</p>
| <p>Types and UDTs don't appear in sys.objects.
You should be able to get what you're looking for with the following:</p>
<pre><code>select * from sys.types
where is_user_defined = 1
</code></pre>
| <p>To expand on jwolly2's answer, here's how you get a list of definitions including the standard data type:</p>
<pre><code>-- User Defined Type definitions TP 20180124
select t1.name, t2.name, t1.precision, t1.scale, t1.max_length as bytes, t1.is_nullable
from sys.types t1
join sys.types t2 on t2.system_type_id = t1.system_type_id and t2.is_user_defined = 0
where t1.is_user_defined = 1 and t2.name <> 'sysname'
order by t1.name
</code></pre>
| 7,808 |
<p>Is there an api to bring the vista side bar to the front (Win+Space) programatically and to do the reverse (send it to the back ground).</p>
| <p>Probably using SetWindowPos you can change it to be placed the top / bottom of the z-order or even as the top-most window. You would need to find the handle to the sidebar using FindWindow or an application like WinSpy.</p>
<p>But after that something like.</p>
<p>Sets the window on top, but not top most.</p>
<pre><code>SetWindowPos(sidebarHandle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NORESIZE);
</code></pre>
<p>Sets the window at the bottom.</p>
<pre><code>SetWindowPos(sidebarHandle, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NORESIZE);
</code></pre>
<p>This is my best guess on achieving what you asked, hopefully it helps.</p>
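<p>If you are working from .NET rather than native code, a rough C# P/Invoke sketch of the same idea is below. Note that the window class name passed to FindWindow is a placeholder/assumption - you would need to confirm the sidebar's actual class name with WinSpy or Spy++ first.</p>
<pre><code>using System;
using System.Runtime.InteropServices;

static class SidebarZOrder
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint uFlags);

    static readonly IntPtr HWND_TOP = new IntPtr(0);
    static readonly IntPtr HWND_BOTTOM = new IntPtr(1);
    const uint SWP_NOSIZE = 0x0001;
    const uint SWP_NOMOVE = 0x0002;

    // windowClass is whatever class name WinSpy reports for the sidebar window.
    public static void BringToFront(string windowClass)
    {
        IntPtr handle = FindWindow(windowClass, null);
        if (handle != IntPtr.Zero)
            SetWindowPos(handle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    }

    public static void SendToBack(string windowClass)
    {
        IntPtr handle = FindWindow(windowClass, null);
        if (handle != IntPtr.Zero)
            SetWindowPos(handle, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    }
}
</code></pre>
<p>HWND_TOP and HWND_BOTTOM mirror the two native calls above; SWP_NOMOVE | SWP_NOSIZE keeps the window's position and size untouched while only the z-order changes.</p>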
| <p>You probably shouldn't do it at all, since such action may annoy the user when executed at the wrong time (95% of cases*), just like stealing focus with a "Yes/No" prompt.</p>
<p>Unless your product's task is to toggle the sidebar of course. ;)</p>
<p>There's no official API for that anyway.</p>
<p>*Purely hypothetical figure</p>
| 9,722 |
<p>Ok, here's a very short and to the point question. When trying to import a virtual PC 2004 Windows 2003 Server VM in VM Workstation 6.0.2 I'm getting an error 'unable to determine guest operating system'. Soo how to fix?</p>
| <p>From <a href="http://www.vi411.org/2007/03/08/vmware-converter-unable-to-determine-guest-operating-system.html" rel="nofollow noreferrer">here</a>:</p>
<ol>
<li><p>Make sure that the VM is not currently running in VMware Server.</p></li>
<li><p>Make sure that VMware Server does not have a lock on the VM’s files. You may have to stop all VMware Server Services and/or reboot the (VMWare) server.</p></li>
<li><p>Make sure you have appropriate permissions to the VM’s files.</p></li>
</ol>
| <p>This is a fairly generic error from VMware Converter so I would try the following:</p>
<p>Step 1. Make sure you are running the latest version of VMware Converter. Updates seem to come pretty often for this tool.</p>
<p>Step 2. Check the VMware Converter log file. More often than not you will find the source of your problem here.</p>
| 5,997 |
<p>ASP.NET server-side controls postback to their own page. This makes cases where you want to redirect a user to an external page, but need to post to that page for some reason (for authentication, for instance) a pain.</p>
<p>An <code>HttpWebRequest</code> works great if you don't want to redirect, and JavaScript is fine in some cases, but can get tricky if you really do need the server-side code to get the data together for the post.</p>
<p>So how do you both post to an external URL and redirect the user to the result from your ASP.NET codebehind code?</p>
| <p>Here's how I solved this problem today. I started from <a href="http://www.c-sharpcorner.com/UploadFile/desaijm/ASP.NetPostURL11282005005516AM/ASP.NetPostURL.aspx" rel="nofollow noreferrer">this article</a> on C# Corner, but found the example - while technically sound - a little incomplete. Everything he said was right, but I needed to hit a few external sites to piece this together to work exactly as I wanted.</p>
<p>It didn't help that the user was not technically submitting a form at all; they were clicking a link to go to our support center, but to log them in an http post had to be made to the support center's site.</p>
<p>This solution involves using <code>HttpContext.Current.Response.Write()</code> to write the data for the form, then using a bit of Javascript on the <code><body onload=""></code> method to submit the form to the proper URL.</p>
<p>When the user clicks on the Support Center link, the following method is called to write the response and redirect the user:</p>
<pre><code>public static void PassthroughAuthentication()
{
System.Web.HttpContext.Current.Response.Write("<body
onload=document.forms[0].submit();window.location=\"Home.aspx\";>");
System.Web.HttpContext.Current.Response.Write("<form name=\"Form\"
target=_blank method=post
action=\"https://external-url.com/security.asp\">");
System.Web.HttpContext.Current.Response.Write(string.Format("<input
type=hidden name=\"cFName\" value=\"{0}\">", "Username"));
System.Web.HttpContext.Current.Response.Write("</form>");
System.Web.HttpContext.Current.Response.Write("</body>");
}
</code></pre>
<p>The key to this method is in that onload bit of Javascript, which, when the body of the page loads, submits the form and then redirects the user back to my own Home page. The reason for that bit of hoodoo is that I'm launching the external site in a new window, but don't want the user to resubmit the hidden form if they refresh the page. Plus that hidden form pushed the page down a few pixels which got on my nerves.</p>
<p>I'd be very interested in any cleaner ideas anyone has on this one.</p>
<p>Eric Sipple</p>
| <p>If you're using ASP.NET 2.0, you can do this with <a href="http://msdn.microsoft.com/en-us/library/ms178139.aspx" rel="nofollow noreferrer">cross-page posting</a>.</p>
<p>Edit: I missed the fact that you're asking about an <em>external</em> page. For that I think you'd need to have your ASP.NET page gen up an HTML form whose action is set to the remote URL and method is set to POST. (Using cross-page posting, this could even be a different page with no UI, only hidden form elements.) Then add a bit of javascript to submit the form as soon as the postback result was received on the client.</p>
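<p>For what it's worth, here is a rough sketch of that "generate a form and auto-submit it" idea as a reusable helper. This is not taken from either answer above - the URL and field names you pass in are up to you, and the method is assumed to live in a Page codebehind so that Response and Server are available:</p>
<pre><code>// Hedged sketch: builds a hidden POST form aimed at an external URL and
// auto-submits it as soon as the generated page loads.
protected void RedirectWithPost(string url, System.Collections.Generic.IDictionary<string, string> fields)
{
    var html = new System.Text.StringBuilder();
    html.Append("<html><body onload=\"document.forms[0].submit();\">");
    html.AppendFormat("<form method=\"post\" action=\"{0}\">", url);
    foreach (var pair in fields)
    {
        html.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
            Server.HtmlEncode(pair.Key), Server.HtmlEncode(pair.Value));
    }
    html.Append("</form></body></html>");

    Response.Clear();
    Response.Write(html.ToString());
    Response.End();
}
</code></pre>
<p>Calling it from a button click replaces the normal postback response with a tiny page that immediately posts the hidden fields to the external site.</p>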
| 2,715 |
<p>I'm experimenting with Linq and am having trouble figuring out grouping. I've gone through several tutorials but for some reason can't figure this out.</p>
<p>As an example, say I have a table (SiteStats) with multiple website IDs that stores a count of how many visitors by type have accessed each site in total and for the past 30 days.</p>
<pre><code>╔════════╦═════════════╦════════╦══════╗
║ SiteId ║ VisitorType ║ Last30 ║ Total║
╠════════╬═════════════╬════════╬══════╣
║ 1 ║ 1 ║ 10 ║ 100 ║
║ 1 ║ 2 ║ 40 ║ 140 ║
║ 2 ║ 1 ║ 20 ║ 180 ║
╚════════╩═════════════╩════════╩══════╝
</code></pre>
<p>In SQL, I can easily get the counts for SiteID 1 with the following:</p>
<pre><code>SELECT SiteId,
SUM(Last30) AS Last30Sum
FROM Sites
WHERE SiteId = 1
GROUP BY SiteId
</code></pre>
<p>and should get a row like...</p>
<pre><code>╔════════╦════════════╗
║ SiteId ║ Last30Total║
╠════════╬════════════╣
║ 1 ║ 50 ║
╚════════╩════════════╝
</code></pre>
<p>However I'm not sure how to get this result using Linq. I've tried:</p>
<pre><code>var statsRecord = from ss in db.SiteStats
where ss.SiteId == siteId
group ss by ss.SiteId into ss
select ss;
</code></pre>
<p>but I'm not able to get back the total with something like <code>statsRecord.Last30</code></p>
<p>Can someone please let me know where I'm going wrong? Any help is appreciated.</p>
| <p>Actually, although Thomas' code will work, it is more succint to use a lambda expression:</p>
<pre><code>var totals =
from s in sites
group s by s.SiteID into grouped
select new
{
SiteID = grouped.Key,
Last30Sum = grouped.Sum( s => s.Last30 )
};
</code></pre>
<p>which uses the Sum extension method without the need for a nested LINQ operation.</p>
<p>as per the LINQ 101 examples - <a href="http://msdn.microsoft.com/en-us/vcsharp/aa336747.aspx#sumGrouped" rel="noreferrer">http://msdn.microsoft.com/en-us/vcsharp/aa336747.aspx#sumGrouped</a></p>
| <p>Easiest way for me to illustrate is using in-memory objects so it's clear what's happening. LINQ to SQL should be able to take that same LINQ query and translate it into appropriate SQL.</p>
<pre><code>public class Site
{
static void Main()
{
List<Site> sites = new List<Site>()
{
new Site() { SiteID = 1, VisitorType = 1, Last30 = 10, Total = 100, },
new Site() { SiteID = 1, VisitorType = 2, Last30 = 40, Total = 140, },
new Site() { SiteID = 2, VisitorType = 1, Last30 = 20, Total = 180, },
};
var totals =
from s in sites
group s by s.SiteID into grouped
select new
{
SiteID = grouped.Key,
Last30Sum =
(from value in grouped
select value.Last30).Sum(),
};
foreach (var total in totals)
{
Console.WriteLine("Site: {0}, Last30Sum: {1}", total.SiteID, total.Last30Sum);
}
}
public int SiteID { get; set; }
public int VisitorType { get; set; }
public int Last30 { get; set; }
public int Total { get; set; }
}
</code></pre>
| 5,527 |
<p>So I have some old filament that I originally got for a 3D pen. The problem is it's unlabeled and I haven't been able to find anything that might help me distinguish whether it's PLA or ABS. The bag it all came in says that wherever this filament came from only makes PLA and ABS so it's got to be on of those two.</p>
<p>I have a roll of PLA in my 3D printer right now, but I can't tell if it's the same as the filament I have for the 3D pen. It's been a while since I've used the 3D pen, but I do remember whenever you used it, it would produce a very very bad smell. I've also noticed that the filament seems to be more flexible that the PLA in my machine. This makes me think it could be ABS, because the PLA smells far better than what I remember the 3D pen smelling like, and it's more flexible.</p>
<p>I also don't really want to do any heat tests or anything on the filament, so if the smell and flexibility is enough to determine which filament it is, could anyone tell me?</p>
| <p>Mick's suggestion is a good one. PLA may shed some color in acetone, but ABS will dissolve completely in a suitable amount of time. If you have dark filament, you can test by flexing the filament until it breaks. ABS will sometimes/often/usually fatigue with a white break line, while PLA does not exhibit this tendency as much.</p>
<p>PLA has a somewhat sweet smell, which may be the corn sugars burning off, while ABS has a much more chemical-like odor.</p>
<p>Not doing heat testing does limit your options.</p>
| <p>Just burn it and check flame color.</p>
<p>I know you mentioned that you would like to avoid heat test, but this method is much faster and easier then other techniques.</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=bNKno20GMMQ" rel="nofollow noreferrer">3D printing filament burn test</a></li>
<li><a href="https://www.youtube.com/watch?v=a7l0Aaysy_8" rel="nofollow noreferrer">Do 3D Prints Catch Fire? ABS / PLA / PETG Burn Test - Episode 1</a></li>
</ul>
| 1,750 |
<p>I recently became curious about the Line Width setting in Cura and why one might change it if they aren't using different size nozzle.</p>
<p>Since I've gotten my Ender 3, I've always kept the line width equal to my nozzle size (<em>0.4 mm</em>). I've <a href="https://www.reddit.com/r/3Dprinting/comments/5zxj1z/should_line_width_always_nozzle_size/" rel="noreferrer">looked around a bit</a>, and it seems like most people actually set their line widths to be higher, depending upon who you ask anywhere from 120 - 150 % nozzle diameter. </p>
<p>Why is this? They mention that it helps with print adhesion, but why? Shouldn't a 0.4 mm nozzle create a line of plastic 0.4 mm wide, necessitating a line spacing of 0.4 mm?</p>
| <p>There are several things at play that can make a wider line nice to have:</p>
<h1>First layer adhesion</h1>
<p>Due to some filaments having serious struggle to get the first line or layer stuck to the bed, it can be an easy fix to just increase the line width, generating a bigger Adhesive Force <span class="math-container">$F_a\propto A(l,w)$</span>, where A is the area covered by the line, and thus simply <span class="math-container">$A=l*w$</span> with length l and width w of the line. So, a wider line means better <em>initial</em> adhesion and <em>can</em> lead to less failed prints in layer 1.</p>
<h1>Plastic Goo</h1>
<p>Plastics under heat behave in certain ways: they turn into a gooey substance that expands. This is also the reason why prints shrink a little as they cool. Now, if we press the plastic onto the bed with more force (as we force more plastic through than before to go from 0.4 mm to 0.5 mm) for the first time, we have a roughly flat area. The extra filament will make a wider line. The slicer can account for that, and does.</p>
<p>Now, next layer up: Where does the extra material go now? Plastic goo has one property that is very interesting: it tries to shrink its surface as much as possible. Heat a short piece with an airgun and it gets a little beady. But on the other hand, it comes hot enough from the nozzle to melt a tiny surface area of the already built layers, which is how layer bonding works in the first place. But our goopy plastic finds the layer below not exactly flat like the first layer found its lower surface, it finds a shape of ridges and valley. Taking into account that it wants to have the least surface to non-plastic (=air) and slightly cross bonds with the print, it will fill these nooks and crevices <em>inside</em> the print a tiny little better, as the increased force we use to push it out also increased the speed at which it expands to them: we reduce the time a tiny bit to reach there. How does it matter?</p>
<p><a href="https://i.ytimg.com/vi/0XcMz05zejI/maxresdefault.jpg" rel="noreferrer"><img src="https://i.ytimg.com/vi/0XcMz05zejI/maxresdefault.jpg" alt="a thermal image of a 3D print" /></a></p>
<p>Well, heat transfer is based, roughly speaking, on a formula like this: <span class="math-container">$Q = mc\Delta T$</span>, where Q is the thermal energy of the object, m the mass of the object, c its specific heat capacity and T the temperature, ΔT being the temperature change. But we don't have a homogeneous object; we have pretty much a heat distribution with touching zones of different heat. The actual formula for the heat transfer inside the object is a long mess containing stuff like the gradient <span class="math-container">$\text{grad}T$</span>, thermal conductivities, and integrals, but what matters is the result: the faster-expanding line of filament loses a little less thermal energy to its surroundings than the less forcefully extruded line, which can increase the bonding between the two on several fronts:</p>
<ul>
<li>it enters the crevices further before reverting from goo to solid, leading to better adhesion for more surface.</li>
<li>it contains more thermal energy that can and will get transmitted to the layer below and has a bigger surface area, so it can increase the zone thickness that gets remelted a tiny bit, increasing the layer bonding strength a little.</li>
</ul>
<p>This <em>can</em> result in a problem though: if you don't give the printed lines enough time to cool, it can lead to the material to accumulate heat more and more, leading to the whole thing to melt and turn into goop. An easy fix to this side problem is minimum layer time. But that would be only tangential to the original question, so look for example at the question <a href="https://3dprinting.stackexchange.com/questions/4975/printing-starts-well-but-then-it-breaks-down-anet-a8/4976#4976">here</a> or the video the thermal picture above is taken from <a href="https://www.youtube.com/watch?v=0XcMz05zejI" rel="noreferrer">here</a>.</p>
| <p>I'll give a short answer here: It's the volume. The nozzle redistributes the volume of the plastic into a different shape, i.e. the nozzle is turning a cylinder of 0.4 mm diameter into a rectangle of the same volume, whose width is a function of the volume and the layer height.</p>
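<p>To put rough numbers on that volume argument (an illustrative example with assumed values, not figures from the answers above): with a line width of <span class="math-container">$w = 0.45\,mm$</span>, a layer height of <span class="math-container">$h = 0.2\,mm$</span> and a print speed of <span class="math-container">$v = 50\,mm/s$</span>, the nozzle has to deliver about <span class="math-container">$w \cdot h \cdot v = 0.45 \cdot 0.2 \cdot 50 = 4.5\,mm^3/s$</span> of plastic. For 1.75 mm filament (cross-section <span class="math-container">$\approx 2.4\,mm^2$</span>) that means feeding roughly <span class="math-container">$4.5 / 2.4 \approx 1.9\,mm/s$</span> of filament. This is the bookkeeping the slicer does when you change the line width: a wider line simply means proportionally more volume per millimetre of travel.</p>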
| 1,045 |
<p>In general 3D printers are compact and smaller than RP machines. That's ok. But, what's the difference? 3D printers can be used as RP machine too.</p>
| <p>All rapid prototyping means is automatically producing a physical part from a cad model. 3D printing is a way to achieve rapid prototyping. There are 2 main methods of rapid prototyping: additive, and subtractive.</p>
<p>A 3D printer is additive- you add materials to an object layer by layer.</p>
<p>Usually, when people talk about a subtractive machine, they are talking about a CNC mill (or lathe), which tend to be extremely large (most are over one ton). You start with all the material there, and you subtract the material that you don't want. This might be what you are thinking of.</p>
| <p>A sintered metal printer is a version of a 3D printer that is rapid, but expensive. I have seen one for AU$800,000.
It uses a laser to melt metal particles such as titanium.</p>
| 176 |
<p>I have reached the point where I've decided to replace my custom-built replication system with a system that has been built by someone else, mainly for reliability purposes. Can anyone recommend any replication system that is worth it? Is <a href="http://fibre.sourceforge.net" rel="noreferrer">FiBRE</a> any good?</p>
<p>What I need might be a little away from a generic system, though. I have five departments with each having it's own copy of the database, and the master in a remote location. The departments all have sporadic internet connection, the master is always online. The data has to flow back and forth from the master, meaning that all departments need to be equal to the master (when internet connection is available), and to upload changes made during network outage that are later distributed to other departments by the master.</p>
| <p>I have used CopyCat to create a replication project. It allows you create your own replication client/server configuration using CodeGear Delphi. This allows you complete flexibilty as to how you want your replication to work.</p>
<p>If you don't use Delphi, or need a prefabricated solution, CopyTiger does the same thing already configured. </p>
| <p>The Ibphoenix site list replication tools</p>
<p><a href="http://www.ibphoenix.com/download/tools/replication" rel="nofollow noreferrer">IbPhoenix Replication Tools</a></p>
| 8,727 |
<p>In a .NET project, say you have a configuration setting - like a connection string - stored in a app.config file, which is different for each developer on your team (they may be using a local SQL Server, or a specific server instance, or using a remote server, etc). </p>
<p>How can you structure your solution so that each developer can have their own development "preferences" (i.e. not checked into source control), but provide a default connection string that is checked into source control (thereby supplying the correct defaults for a build process or new developers).</p>
<p><hr />
Edit: Can the "<code>file</code>" method suggested by @Jonathon be somehow used with the <code>connectionStrings</code> section?</p>
| <p>AppSettings can be overridden with a local file:</p>
<pre><code><appSettings file="localoveride.config"/>
</code></pre>
<p>This allows for each developer to keep their own local settings.</p>
<p>As far as the connection string goes, in a perfect world all developers should connect to a shared test DB rather than each running their own SQL Server.</p>
<p>However, I've found it best to keep a file named Web.Config.Prd in source control, and use that for build deployments. If someone modifies web.config, they must also add the change to the .PRD file...There is no good automation there :(</p>
| <p>I always make templates for my config files. </p>
<p>As an example I use NAnt for the building of my projects. I have a file checked in called local.properties.xml.template. My NAnt build will warn the developer if local.properties.xml does not exist. Inside that file will be workstation specific settings. The template will be checked into source control, but the actual config won't be.</p>
| 3,980 |
<p>I've been tasked with the the maintenance of a nonprofit website that recently fell victim to a SQL injection attack. Someone exploited a form on the site to add text to every available text-like field in the database (varchar, nvarchar, etc.) which, when rendered as HTML, includes and executes a JavaScript file. </p>
<p>A Google search of the URL indicates that it's from email spammers based out of Romania or China, but that's not what's important right now.</p>
<p>I went through and manually removed the information from the the text fields that render on most visible and popular pages on the site but I'm curious as to what would be the best programmatic way of removing the text from the other text fields on the site. </p>
<p>Obviously there's more that needs to be done (hardening the site against SQL injections, using something like markdown instead of storing HTML, etc.) and I am working on those but for the time being what I really need is a good way to go in and programmatically remove the injected text. I know what the exact text is, it's the same every time, and it's always appended to the end of any text field. I can't afford to strip out all HTML in the database at this time and I don't know when this happened exactly so I can't just roll back to a backup. Also, the site is on shared hosting and I cannot connect to the database directly with SQL Server tools. I can execute queries against it though, so if there's any way of constructing a SQL update statement to the effect of "hey find all the text fields in all of the tables in the entire database and do this to clean them" that would be the best. </p>
| <p>Restore the data from a recent backup.</p>
| <p>This will reverse that, also it would be wise to take sysobject permissions away from the username your site runs with, and to sanitize input of course</p>
<pre><code>DECLARE @T VARCHAR(255),@C VARCHAR(4000)
DECLARE Table_Cursor CURSOR FOR
SELECT a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id and a.xtype='u' and
(b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167)
OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T,@C
WHILE(@@FETCH_STATUS=0)
BEGIN
EXEC('if exists (select 1 from ['+@T+'] where ['+@C+'] like ''%"></title><script src="http://1.verynx.cn/w.js"></script><!--'') begin print ''update ['+@T+'] set ['+@C+']=replace(['+@C+'],''''"></title><script src="http://1.verynx.cn/w.js"></script><!--'''','''''''') where ['+@C+'] like ''''%"></title><script src="http://1.verynx.cn/w.js"></script><!--'''''' end')
FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
</code></pre>
<p>I wrote about this a while back here: <a href="http://blogs.lessthandot.com/index.php/WebDev/WebDesignGraphicsStyling/microsoft-has-released-tools-to-address-" rel="nofollow noreferrer">Microsoft Has Released Tools To Address SQL Injection Attacks</a></p>
| 5,234 |
<p>In a <strong>Win32</strong> environment, you can use the <strong>GetLastInputInfo API</strong> call in <a href="https://learn.microsoft.com/windows/desktop/api/winuser/nf-winuser-getlastinputinfo" rel="nofollow noreferrer">Microsoft documentation</a>. Basically, this method returns the last tick that corresponds with when the user last provided input, and you have to compare that to the current tick to determine how long ago that was.</p>
<p>Xavi23cr has a good example for C# at <a href="http://www.codeproject.com/KB/cs/GetIdleTimeWithCS.aspx" rel="nofollow noreferrer">codeproject</a>.</p>
<p>Any suggestions for other environments?</p>
| <p>As for Linux, I know that Pidgin has to determine idle time to change your status to away after a certain amount of time. You might open the source and see if you can find the code that does what you need it to do.</p>
| <p>You seem to have answered your own question there Nathan ;-)
"GetLastInputInfo" is the way to go.</p>
<p>One trick is that if your application is running on the desktop, and the user connects to a virtual machine, then GetLastInputInfo will report no activity (since there is no activity on the host machine).</p>
<p>This can be different to the behaviour you want, depending on how you wish to apply the user input.</p>
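<p>For reference, a minimal C# sketch of the GetLastInputInfo call discussed above (essentially the same approach as the CodeProject example the question links to):</p>
<pre><code>using System;
using System.Runtime.InteropServices;

static class IdleTimer
{
    [StructLayout(LayoutKind.Sequential)]
    struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime; // tick count of the last input event
    }

    [DllImport("user32.dll")]
    static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    public static TimeSpan GetIdleTime()
    {
        LASTINPUTINFO info = new LASTINPUTINFO();
        info.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
        if (!GetLastInputInfo(ref info))
            throw new InvalidOperationException("GetLastInputInfo failed");

        // Both values are milliseconds since boot, so the difference is the idle time.
        uint idleMilliseconds = (uint)Environment.TickCount - info.dwTime;
        return TimeSpan.FromMilliseconds(idleMilliseconds);
    }
}
</code></pre>
<p>Keep in mind the virtual machine caveat above still applies: this only sees input in the session the code is running in.</p>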
| 2,480 |
<p>When I use the sp_send_dbmail stored procedure, I get a message saying that my mail was queued. However, it never seems to get delivered. I can see them in the queue if I run this SQL:</p>
<pre><code>SELECT * FROM msdb..sysmail_allitems WHERE sent_status = 'unsent'
</code></pre>
<p>This SQL returns a 1:</p>
<pre><code>SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb'
</code></pre>
<p>This stored procedure returns STARTED:</p>
<pre><code>msdb.dbo.sysmail_help_status_sp
</code></pre>
<p>The appropriate accounts and profiles have been set up and the mail was functioning at one point. There are no errors in msdb.dbo.sysmail_event_log. </p>
| <p>Have you tried </p>
<pre><code>sysmail_stop_sp
</code></pre>
<p>then </p>
<pre><code>sysmail_start_sp
</code></pre>
| <p>Have you tried </p>
<pre><code>sysmail_stop_sp
</code></pre>
<p>then </p>
<pre><code>sysmail_start_sp
</code></pre>
| 3,046 |
<p>I have a web application that needs to read (and possibly write) files from a network share. I was wondering what the best way to do this would be?</p>
<p>I can't give the network service or aspnet accounts access to the network share. I could possibly use impersonation.</p>
<p>The network share and the web application are both hosted on the same domain and I can create a new user on the domain specifically for this purpose however I'm not quite sure how to join the dots between creating the filestream and specifying the credentials to use in the web application.</p>
<hr>
<p>Unfortunately the drive isn't mapped as a network drive on the machine, it's only available to me as a network share so unfortunately I can't make a transparent call.</p>
<p>There is one problem I can think of with impersonation... I can only impersonate one user per application domain I <em>think</em> but I'm happy to be corrected. I may need to write this file to several different shares which means I may have to impersonate several users.</p>
<p>I like the idea of creating a token... if I can do that I'll be able to ask the use up front for their credentials and then dynamically apply the security and give them meaningful error messages if access is denied... I'm off to play but I'll be back with an update.</p>
| <p>Given everyone already has domain accounts. Try IIS integrated authentication. You will get an ugly logon box off network but your creds should pass down to the file share.</p>
<p>@lomaxx<br>
Are you saying that only you have perms to the share, or that you manually mapped it to a drive letter? If the latter, you can use a UNC path \\host\share the same way you would use a c:\shared_folder. </p>
<p>Random
Would it be a burden to mirror the share to a local folder on the host? I hear ROBOCOPY is pretty handy. </p>
<p>Another Idea. Run IIS on your target share you can read via http and if you need to write investigate webdav.</p>
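<p>If you do end up impersonating a dedicated domain account per share, a rough .NET Framework sketch of the mechanics is below. The domain, user name, password and path are placeholders, and LOGON32_LOGON_NEW_CREDENTIALS is used so the supplied credentials only apply to outbound network access:</p>
<pre><code>using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;

static class ShareAccess
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
    const int LOGON32_PROVIDER_WINNT50 = 3;

    public static string ReadFile(string uncPath, string domain, string user, string password)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
        {
            throw new InvalidOperationException(
                "LogonUser failed: " + Marshal.GetLastWin32Error());
        }

        try
        {
            using (WindowsIdentity identity = new WindowsIdentity(token))
            using (WindowsImpersonationContext context = identity.Impersonate())
            {
                // Everything inside this block runs as the supplied account,
                // so different shares can use different credentials per call.
                return File.ReadAllText(uncPath);
            }
        }
        finally
        {
            CloseHandle(token);
        }
    }
}
</code></pre>
<p>Because the impersonation is scoped to the using block, this also sidesteps the "one impersonated user per application" worry - you can log on with different accounts for different shares as needed.</p>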
| <p>I've had no problems connecting to network shares transparently as if they were local drives. The only issue you may have is what you mentioned: having the <code>aspnet</code> account gain access to the share. Impersonation is probably the best way to do this.</p>
<p>You should be able to use any filestream objects to access the network share as long as it has a drive letter on the server machine.</p>
| 4,899 |
<p>I'm using a Java socket, connected to a server.
If I send a HEADER http request, how can I measure the response time from the server? Must I use a provided java timer, or is there an easier way?</p>
<p>I'm looking for a short answer; I don't want to use other protocols, etc. Obviously I also don't want a solution that ties my application to a specific OS. Please people, IN-CODE solutions only.</p>
| <p>I would say it depends on what exact interval you are trying measure, the amount of time from the last byte of the request that you send until the first byte of the response that you receive? Or until the entire response is received? Or are you trying to measure the server-side time only?</p>
<p>If you're trying to measure the server side processing time only, you're going to have a difficult time factoring out the amount of time spent in network transit for your request to arrive and the response to return. Otherwise, since you're managing the request yourself through a Socket, you can measure the elapsed time between any two moments by checking the System timer and computing the difference. For example:</p>
<pre><code>public void sendHttpRequest(byte[] requestData, Socket connection) {
long startTime = System.nanoTime();
writeYourRequestData(connection.getOutputStream(), requestData);
byte[] responseData = readYourResponseData(connection.getInputStream());
long elapsedTime = System.nanoTime() - startTime;
System.out.println("Total elapsed http request/response time in nanoseconds: " + elapsedTime);
}
</code></pre>
<p>This code would measure the time from when you begin writing out your request to when you finish receiving the response, and print the result (assuming you have your specific read/write methods implemented).</p>
| <p>Use AOP to intercept calls to the socket and measure the response time.</p>
| 9,444 |
<p>I have an List and I'd like to wrap it into an IQueryable.</p>
<p>Is this possible?</p>
| <pre><code>List<int> list = new List<int>() { 1, 2, 3, 4, };
IQueryable<int> query = list.AsQueryable();
</code></pre>
<p>If you don't see the <code>AsQueryable()</code> method, add a using statement for <code>System.Linq</code>.</p>
| <p>Use the <code>AsQueryable<T>()</code> extension method.</p>
| 9,929 |
<p>I'm trying to write a web-app that records WAV files (eg: from the user's microphone). I know Javascript alone can not do this, but I'm interested in the least proprietary method to augment my Javascript with. My targeted browsers are Firefox for PC and Mac (so no ActiveX).</p>
<p>I gather it can be done with Flash (but not as a WAV formated file). I gather it can be done with Java (but not without code-signing). Are these the only options?</p>
<p>I'd like to record the file as a WAV because because the purpose of the webapp will be to assemble a library of <em>good</em> quality short soundbites. I estimate upload will be 50 MB, which is well worth it for the quality. The app will only be used on our intranet.</p>
<p>UPDATE: There's now an alternate solution thanks to JetPack's upcoming Audio API: See <a href="https://wiki.mozilla.org/Labs/Jetpack/JEP/18" rel="nofollow noreferrer">https://wiki.mozilla.org/Labs/Jetpack/JEP/18</a></p>
| <p>Flash requires you to use a media server (note: I'm still using Flash MX, but a quick Google search brings up documentation for Flash CS3 that seems to concur - note that Flash CS4 is out soon, might change then). Macromedia / Adobe aim to flog you their media server, but the Red5 open-source project might be suitible for your project:</p>
<p><a href="http://osflash.org/red5" rel="nofollow noreferrer">http://osflash.org/red5</a></p>
<p>I think Java is going to be more suitible. I've seen an applet that might do what you want over on Moodle (an open-source virtual learning environment):</p>
<p><a href="http://64.233.183.104/search?q=cache:k27rcY8QNWoJ:moodle.org/mod/forum/discuss.php%3Fd%3D51231+moodlespeex&hl=en&ct=clnk&cd=1&gl=uk" rel="nofollow noreferrer">http://64.233.183.104/search?q=cache:k27rcY8QNWoJ:moodle.org/mod/forum/discuss.php%3Fd%3D51231+moodlespeex&hl=en&ct=clnk&cd=1&gl=uk</a></p>
<p>(membership-required site, but open to Google, hence the link goes to the Google cache page).</p>
| <p>You could download Real Producer Basic, which is free here (<a href="http://forms.real.com/rnforms/products/tools/producerbasic/" rel="nofollow noreferrer">http://forms.real.com/rnforms/products/tools/producerbasic/</a>), and imbed it as an activeX object since it's on your intranet. Flash will embed the same way, it's on all the office workstations, but since this is your Intranet, you could install it on all the machines with AD. Real audio files are very small compared to wav and sound great. Here's a link to the Real Sudio ActiveX how-to guide. <a href="http://service.real.com/help/library/guides/activex/producerx.html" rel="nofollow noreferrer">http://service.real.com/help/library/guides/activex/producerx.html</a></p>
| 8,948 |
<p>Where is a good place to start with making an application in .NET that communicates through <a href="http://www.opcfoundation.org/Default.aspx/01_about/01_whatis.asp" rel="nofollow noreferrer">OPC</a>?</p>
| <p>You can find a good article and a library to start with here: <a href="http://www.codeproject.com/KB/COM/opcdotnet.aspx" rel="nofollow noreferrer">http://www.codeproject.com/KB/COM/opcdotnet.aspx</a></p>
| <p>If buying a comercial toolkit is an option I've used the Northern Dynamics <a href="http://www.nordyn.com/Main/Products/Main.htm" rel="nofollow noreferrer">server toolkit</a> and it worked fine. A toolkit will take away a lot of the issues mentioned in the other questions (or at least you should get support if there's a problem).</p>
<p>They've wrapped the OPC protocol up nicely so it makes it easy to use. See one of my questions here for a <a href="https://stackoverflow.com/questions/1649264/how-to-make-a-type-safe-wrapper-around-variant-values">type-safe Variant wrapper</a> that I wrote to help with this.</p>
| 9,233 |
<p>For years I have been using the DEBUG compiler constant in VB.NET to write messages to the console. I've also been using System.Diagnostics.Debug.Write in similar fashion. It was always my understanding that when RELEASE was used as the build option, that all of these statements were left out by the compiler, freeing your production code of the overhead of debug statements. Recently when working with Silverlight 2 Beta 2, I noticed that Visual Studio actually attached to a RELEASE build that I was running off of a public website and displayed DEBUG statements which I assumed weren't even compiled! Now, my first inclination is to assume that that there is something wrong with my environment, but I also want to ask anyone with deep knowledge on System.Diagnostics.Debug and the DEBUG build option in general what I may be misunderstanding here.</p>
| <p>The preferred method is to actually use the conditional attribute to wrap your debug calls, not use the compiler directives. #ifs can get tricky and can lead to weird build problems.</p>
<p>An example of using a conditional attribute is as follows (in C#, but works in VB.NET too):</p>
<pre><code>[ Conditional("Debug") ]
private void WriteDebug(string debugString)
{
// do stuff
}
</code></pre>
<p>When you compile without the DEBUG flag set, any call to WriteDebug will be removed as was assumed was happening with Debug.Write().</p>
| <p>In my experience choosing between Debug and Release in VB.NET makes no difference. You may add custom actions to both configuration, but by default I think they are the same.</p>
<p>Using Release will certainly not remove the System.Diagnostics.Debug.Write statements.</p>
| 4,854 |
<p>I'm starting to get familiar with 3D printers. I wish to know if printing details the size of 10<sup>-7</sup> m (3.9*10<sup>-6</sup> in) is possible these days with metals or any other material.</p>
<p>If anyone has information or articles as leads, I would really appreciate it.</p>
| <p>There are no 3D printers (that I'm aware of) that can do 0.0001 millimeters, hobbyist or industrial, plastic or metal. For that matter I think you'll be hard pressed to find a CNC mill with that type of tolerances. </p>
<p>Sorry!</p>
| <p>Today, only SLA/DLP will give features of that size (if even these technologies do), and I am unaware of anyone using these technologies to print with metals today.</p>
| 535 |
<p>I'm wondering if there is any nice and neat tool to replace the GNU Autotools or Make for building a very large C++ project; they are such complicated things to use.</p>
<p>It is simple to generate all the files that the Autotools require if the project is small, but if the source code is divided into many directories, with multiple third-party libraries and many dependencies, you fall into the "Autotools Hell"...</p>
<p>Thanks for any recommendations.</p>
| <p>The <a href="http://code.google.com/p/v8/" rel="noreferrer">Google V8 JavaScript Engine</a> is written in C++ and uses <a href="http://www.scons.org/" rel="noreferrer">SCons</a>, so I guess that's one vote for it.</p>
| <p>I use <a href="http://www.bakefile.org/index.html" rel="nofollow noreferrer">bakefile</a> for my build process and I became a big fan!</p>
<p>I never have to write a Makefile myself anymore, let alone horrible GNU autotools scripts. All I have to do is provide an <code>XML</code> file that describes the build targets. Bakefile can convert this into a Makefile that gets all the (header file) dependencies right etc, where different Makefile formats may be chosen (pasting the list from the documentation):</p>
<pre><code>available formats are:
autoconf GNU autoconf Makefile.in files
borland Borland C/C++ makefiles
dmars Digital Mars makefiles
dmars_smake Digital Mars makefiles for SMAKE
gnu GNU toolchain makefiles (Unix)
mingw MinGW makefiles (mingw32-make)
msevc4prj MS eMbedded Visual C++ 4 project files
msvc MS Visual C++ nmake makefiles
msvc6prj MS Visual C++ 6.0 project files
msvs2003prj MS Visual Studio 2003 project files
msvs2005prj MS Visual Studio 2005 project files
symbian Symbian development files
watcom OpenWatcom makefiles
xcode2 Xcode 2.4 project files
</code></pre>
<p>I usually use the <em>autoconf</em> option, and it writes the annoying GNU autotools scripts for me. I did have to adapt the <code>configure.ac</code> script, so that configure finds a certain library on any system. But it wasn't too bad. Getting the autoconf scripts in this way is nice, because I don't have to write them all by myself, and when I distribute my project it will look as if I had written them, and users can still build my project in the god-given way, with </p>
<pre><code>./configure && make && make install
</code></pre>
| 7,827 |
<p>In my (PHP) web app, I have a part of my site that keeps a history of recent searches. The most recent queries get shown in a side box. If the query text is too long, I truncate it and show ellipses. Eg: "My very long query is..."</p>
<p>Currently, I truncate after a certain number of characters. Since the font is not monotype, a query of all I's is more narrow than a query of all W's. I'd like them to all be about the same width prior to the ellipses. Is there a way to get the approximate width of the resulting string so that the ellipses for any given string will occur in about the same number of pixels from the beginning? Does CSS have a way? Does PHP? Would this be better handled by JavaScript?</p>
| <p>Here's another take on it and you don't have to live without the ellipsis!</p>
<pre><code><html>
<head>
<style>
div.sidebox {
width: 25%;
}
div.sidebox div.qrytxt {
height: 1em;
line-height: 1em;
overflow: hidden;
}
div.sidebox div.qrytxt span.ellipsis {
float: right;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Short text. Fail!
</div>
</body>
</html>
</code></pre>
<p>There is one flaw with this, if the text is short enough to be fully displayed, the ellipses will still be displayed as well.</p>
<p>[EDIT: 6/26/2009]</p>
<p>At the suggestion of Power-Coder I have revised this a little. There are really only two changes, the addition of the <code>doctype</code> (see notes below) and the addition of the <code>display: inline-block</code> attribute on the <code>.qrytxt</code> DIV. Here is what it looks like now...</p>
<pre><code><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<style>
div.sidebox
{
width: 25%;
}
div.sidebox div.qrytxt
{
height: 1em;
line-height: 1em;
overflow: hidden;
display: inline-block;
}
div.sidebox div.qrytxt span.ellipsis
{
float: right;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">&hellip;</span>
Short text. FTW
</div>
</div>
</body>
</html>
</code></pre>
<p>Notes:</p>
<ul>
<li><p>Viewed in IE 8.0, Opera 9, FF 3</p></li>
<li><p>A <code>doctype</code> is required for IE to get the <code>display: inline-block</code> to work correctly.</p></li>
<li><p>If the <code>.qrytxt</code> DIV's overflow occurs on a long word, there is going to be a wide gap between the ellipsis and the last visible word. You can see this by viewing the example and resizing your browser width in small increments. (this probably existed in the original example as well, I just may have not noticed it then)</p></li>
</ul>
<p>So again, an imperfect CSS-only solution. Javascript may be the only thing that can get the effect perfect.</p>
<p>[EDIT: 6/27/2009]</p>
<p>Here is another alternative which uses browser specific extensions.</p>
<pre><code><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<style>
div.sidebox
{
width: 26%;
}
div.sidebox div.qrytxt
{
height: 1em;
line-height: 1em;
overflow: hidden;
text-overflow:ellipsis;
-o-text-overflow:ellipsis;
-ms-text-overflow:ellipsis;
-moz-binding:url(ellipsis-xbl.xml#ellipsis);
white-space:nowrap;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
Short text. FTW
</div>
</div>
</body>
</html>
</code></pre>
<p>Note that in order for the above example to work, you must create the xml file referenced by the -moz-binding rule, <strong><em>ellipsis-xbl.xml</em></strong>. It's should contain the following xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<bindings xmlns="http://www.mozilla.org/xbl" xmlns:xbl="http://www.mozilla.org/xbl" xmlns:xul="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
<binding id="ellipsis">
<content>
<xul:window>
<xul:description crop="end" xbl:inherits="value=xbl:text"><children/></xul:description>
</xul:window>
</content>
</binding>
</bindings>
</code></pre>
| <blockquote>
<p>Does CSS have a way?</p>
</blockquote>
<p>No</p>
<blockquote>
<p>Does PHP?</p>
</blockquote>
<p>No</p>
<p>-</p>
<p>To do that you'd have to get the font metrics for each character and apply them to all the letters in your string. While you could do this by using a drawing/rendering library like ImageMagick on the server, it wouldn't really work, because different browsers on different OSes render fonts differently.</p>
<p>Even if it did work, you wouldn't want to do it, because it would also take forever to render. Your server would be able to push 1 page per second (if that) instead of several thousand.</p>
<p>If you can live without the trailing ..., then you can nicely fake it using <code>div</code> tags and css <code>overflow: hidden</code>, like this:</p>
<pre><code>.line_of_text {
height:1.3em;
line-height:1.3em;
overflow:hidden;
}
<div class="line_of_text"> Some long text which will arbitrarily be cut off at whatever word fits best</div>
</code></pre>
| 5,401 |
<p>I've heard a lot about Simplify3D recently. It sounds like great software, but 150 dollars!? The only thing that seems all that different from other slicers is the placeable supports, but Z-Suite has that now as well. Cura must be coming out with placeable supports soon if they're going to stay in the game. It does also have dual extrusion, but so does just about every other free slicer out there. I just don't understand what makes it worth 150 US dollars.</p>
| <p>The free slicers are catching up with Simplify3D, and are even overtaking it in some areas, so purchasing it may not make much sense to hobbyists. However, for professional print shops, where printers may cost twenty times as much (or even more), 150 dollars is nothing (especially since it is 150 dollars per PC, not 150 dollars per printer).</p>
| <p>Well, Simplify3D has more than only placeable supports, and I'm very happy with it. I built my own 3D printers and had headaches with free slicers, because I had to review the final result in Pronterface, which of course couldn't run smoothly on P4 PCs with XP.</p>
<p>When I started looking at other programs I resorted to Cura, getting nice prints, but I had problems with Z offsets and a lot of burrs or "spider threads".</p>
<p>Then I got Simplify3D; since that first print I have never switched back to other slicers, not even Cura. I have set up multiple profiles for different filament types according to their best flow temperature and printer type, since I use the same program on one PC for 3 printers and am planning to get 2 additional printers.</p>
<p>Printers are almost free, software not.</p>
| 763 |
<p>I'm running VisualSVN as my SVN server and using TortoiseSVN as the client. I've just renamed the server from mach1 to mach2 and now can't use SVN because it's looking for the repositories at <a href="http://mach1:81/" rel="nofollow noreferrer">http://mach1:81/</a> instead of the new name <a href="http://mach2:81/" rel="nofollow noreferrer">http://mach2:81/</a></p>
<p>Any idea how to fix this?</p>
| <p>Use the "relocate" option provided by Tortoise SVN. Just right click on the upper-most checked out folder, select relocate, and then enter the new URL.</p>
| <p>First Google hit:</p>
<pre><code>svn sw --relocate svn://example1.com:22/name http://example2.com:24/edc
</code></pre>
| 9,500 |
<p>After I was convinced that labeled breaks/continues are a total "nono" over <a href="https://stackoverflow.com/questions/15481/java-coding-standard-best-practices-labeled-brakecontinue#15501">here</a>, I need help to remove the label out of my code.</p>
<p>I have a square matrix and a vector that has the same length. The vector has already some values in it an depending on the values in the matrix the vector is changed in the loop.</p>
<p>I hope, the code-fragment is basically understandable… </p>
<pre><code>vectorLoop:
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop;
matrixLoop:
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop;
}
setValueInVector( v, idx );
}
</code></pre>
<p>Please convince me, that there is a more readable/better version without the labels.</p>
| <p>Looking at the solutions presented so far:</p>
<ul>
<li><p>They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself</p></li>
<li><p>Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.</p></li>
<li><p>Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.</p></li>
<li><p>Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.</p></li>
</ul>
<p>All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.</p>
| <p>Does this work for you? I extracted the inner loop into a method <code>CheckedEntireMatrix</code> (you can probably name it better than I did). Also, my Java is a bit rusty, but I think it gets the message across:</p>
<pre><code>for( int idx = 0; idx < vectorLength; idx++) {
    if( conditionAtVectorPosition( v, idx )
        || !CheckedEntireMatrix( v, m, idx )) continue;

    setValueInVector( v, idx );
}

// Vector and Matrix stand in for whatever types v and m actually are
private boolean CheckedEntireMatrix(Vector v, Matrix m, int idx)
{
    for( int rowIdx = 0; rowIdx < n; rowIdx++ ) {
        if( anotherConditionAtVector( v, rowIdx ) ) continue;
        if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;
    }
    return true;
}
</code></pre>
| 3,666 |
<p>Does anyone have, or know of, a binary patch generation algorithm implementation in C#?</p>
<p>Basically, compare two files (designated <em>old</em> and <em>new</em>), and produce a patch file that can be used to upgrade the <em>old</em> file to have the same contents as the <em>new</em> file.</p>
<p>The implementation would have to be relatively fast, and work with huge files. It should exhibit O(n) or O(logn) runtimes.</p>
<p>My own algorithms tend to either be lousy (fast but produce huge patches) or slow (produce small patches but have O(n^2) runtime).</p>
<p>Any advice, or pointers for implementation would be nice.</p>
<p>Specifically, the implementation will be used to keep servers in sync for various large datafiles that we have one master server for. When the master server datafiles change, we need to update several off-site servers as well.</p>
<p>The most naive algorithm I have made, which only works for files that can be kept in memory, is as follows:</p>
<ol>
<li>Grab the first four bytes from the <em>old</em> file, call this the <em>key</em></li>
<li>Add those bytes to a dictionary, where <em>key -> position</em>, where <em>position</em> is the position where I grabbed those 4 bytes, 0 to begin with</li>
<li>Skip the first of these four bytes, grab another 4 (3 overlapping, 1 new), and add them to the dictionary the same way</li>
<li>Repeat steps 1-3 for all 4-byte blocks in the <em>old</em> file</li>
<li>From the start of the <em>new</em> file, grab 4 bytes, and attempt to look it up in the dictionary</li>
<li>If found, find the longest match if there are several, by comparing bytes from the two files</li>
<li>Encode a reference to that location in the <em>old</em> file, and skip the matched block in the <em>new</em> file</li>
<li>If not found, encode 1 byte from the <em>new</em> file, and skip it</li>
<li>Repeat steps 5-8 for the rest of the <em>new</em> file</li>
</ol>
<p>This is somewhat like compression, without windowing, so it will use a lot of memory. It is, however, fairly fast, and produces quite small patches, as long as I try to keep the encoded output minimal.</p>
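<p>For illustration, here is a bare-bones, in-memory sketch of the block-matching idea above in C#. The type and method names, and the "patch format" (a plain list of copy/insert operations), are made up for this example — it is not the actual implementation I link to further down:</p>
<pre><code>using System.Collections.Generic;

class PatchOp
{
    public bool IsCopy;        // true: copy Length bytes starting at SourcePosition in the old file
    public int SourcePosition; // start of the matched block in the old file (copy ops only)
    public int Length;         // number of bytes to copy (1 for a literal insert)
    public byte Literal;       // the literal byte to emit when IsCopy is false
}

static class NaiveBinaryDiff
{
    public static List<PatchOp> Diff(byte[] oldData, byte[] newData)
    {
        // Steps 1-4: index every overlapping 4-byte key of the old file by position.
        Dictionary<uint, List<int>> index = new Dictionary<uint, List<int>>();
        for (int i = 0; i + 4 <= oldData.Length; i++)
        {
            uint key = ReadKey(oldData, i);
            List<int> positions;
            if (!index.TryGetValue(key, out positions))
            {
                positions = new List<int>();
                index[key] = positions;
            }
            positions.Add(i);
        }

        // Steps 5-9: walk the new file, emitting the longest copy we can find, else one literal byte.
        List<PatchOp> ops = new List<PatchOp>();
        int pos = 0;
        while (pos < newData.Length)
        {
            int bestPos = -1, bestLen = 0;
            if (pos + 4 <= newData.Length)
            {
                List<int> candidates;
                if (index.TryGetValue(ReadKey(newData, pos), out candidates))
                {
                    foreach (int candidate in candidates)
                    {
                        int len = MatchLength(oldData, candidate, newData, pos);
                        if (len > bestLen) { bestLen = len; bestPos = candidate; }
                    }
                }
            }

            if (bestLen >= 4)
            {
                ops.Add(new PatchOp { IsCopy = true, SourcePosition = bestPos, Length = bestLen });
                pos += bestLen;
            }
            else
            {
                ops.Add(new PatchOp { IsCopy = false, Length = 1, Literal = newData[pos] });
                pos++;
            }
        }
        return ops;
    }

    // Packs 4 consecutive bytes into a single uint key.
    static uint ReadKey(byte[] data, int offset)
    {
        return (uint)(data[offset]
            | (data[offset + 1] << 8)
            | (data[offset + 2] << 16)
            | (data[offset + 3] << 24));
    }

    // Counts how many bytes match between the old and new data from the given positions.
    static int MatchLength(byte[] oldData, int oldPos, byte[] newData, int newPos)
    {
        int len = 0;
        while (oldPos + len < oldData.Length
            &amp;&amp; newPos + len < newData.Length
            &amp;&amp; oldData[oldPos + len] == newData[newPos + len])
        {
            len++;
        }
        return len;
    }
}
</code></pre>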
<p>A more memory-efficient algorithm uses windowing, but produces much bigger patch files.</p>
<p>There are more nuances to the above algorithm that I skipped in this post, but I can post more details if necessary. I do, however, feel that I need a different algorithm altogether, so improving on the above algorithm is probably not going to get me far enough.</p>
<hr>
<p><strong>Edit #1</strong>: Here is a more detailed description of the above algorithm.</p>
<p>First, combine the two files, so that you have one big file. Remember the cut-point between the two files.</p>
<p>Secondly, do that <em>grab 4 bytes and add their position to the dictionary</em> step for everything in the whole file.</p>
<p>Thirdly, from where the <em>new</em> file starts, do the loop with attempting to locate an existing combination of 4 bytes, and find the longest match. Make sure we only consider positions from the old file, or from <em>earlier in the new file than we're currently at</em>. This ensures that we can reuse material in both the old and the new file during patch application.</p>
<hr>
<p><strong>Edit #2</strong>: <a href="http://code.google.com/p/lvknet/source/browse/trunk/LVK/IO/Patching/Binary/BinaryPatch.cs" rel="noreferrer">Source code to the above algorithm</a></p>
<p>You might get a warning about the certificate having some problems. I don't know how to resolve that so for the time being just accept the certificate.</p>
<p>The source uses lots of other types from the rest of my library so that file isn't all it takes, but that's the algorithm implementation.</p>
<hr>
<p>@lomaxx, I have tried to find good documentation for the algorithm used in Subversion, called xdelta, but unless you already know how the algorithm works, the documents I've found fail to tell me what I need to know.</p>
<p>Or perhaps I'm just dense... :)</p>
<p>I took a quick peek at the algorithm from the site you gave, and unfortunately it is not usable. A comment in the binary diff file says:</p>
<blockquote>
<p>Finding an optimal set of differences requires quadratic time relative to the input size, so it becomes unusable very quickly.</p>
</blockquote>
<p>My needs aren't optimal though, so I'm looking for a more practical solution.</p>
<p>Thanks for the answer though; I've bookmarked his utilities in case I ever need them.</p>
<p><strong>Edit #1</strong>: Note, I will look at his code to see if I can find some ideas, and I'll also send him an email later with questions, but I've read the book he references, and though the solution is good for finding optimal differences, it is impractical in use due to the time requirements.</p>
<p><strong>Edit #2</strong>: I'll definitely hunt down the Python xdelta implementation.</p>
| <p>Sorry I couldn't be more help. I would definately keep looking at xdelta because I have used it a number of times to produce quality diffs on 600MB+ ISO files we have generated for distributing our products and it performs very well.</p>
| <p>This is a rough guideline, but the following is for the rsync algorithm which can be used to create your binary patches.</p>
<p><a href="http://rsync.samba.org/tech_report/tech_report.html" rel="nofollow noreferrer">http://rsync.samba.org/tech_report/tech_report.html</a></p>
| 2,766 |
<p>I'm doing a website for a family member's wedding. A feature they requested was a photo section where all the guests could go after the wedding and upload their snaps. I said this was a stellar idea and I went off to build it.</p>
<p>Well there's just the one problem: logistics. Upload speeds are slow and photos from modern cameras are huge (2-5+Megs). </p>
<p>I will only need ~800px wide images and some of them might require rotating so ideally I'm looking about using a client-side editor to do three things:</p>
<ol>
<li>Let users pick multiple files</li>
<li>Let them rotate some images so they're the right way up</li>
<li>Resize them and then upload</li>
</ol>
<p>And in my dream world, it'd be free and open source. Any ideas?</p>
<p>Just a reminder: this is something the guests have to use. Some of them will be pretty computer savvy but others will be almost completely illiterate. Installing desktop apps isn't really an option. And I assume 98% of them have Flash and Java installed.</p>
<p>Edit: I'd prefer a Flash/Java option over SilverLight, not least because it has a smaller install rate at the moment, but also because I'm on Linux and I'd like to test it =)</p>
| <p>The most common solution for this is a java applet, although most of them are not free. Examples:</p>
<ul>
<li><a href="http://www.jumploader.com/" rel="nofollow noreferrer">http://www.jumploader.com/</a></li>
<li><a href="http://www.aurigma.com/Products/ImageUploader/OnlineDemo.aspx" rel="nofollow noreferrer">http://www.aurigma.com/Products/ImageUploader/OnlineDemo.aspx</a></li>
<li><a href="http://www.javaatwork.com/java-upload-applet/details.html" rel="nofollow noreferrer">http://www.javaatwork.com/java-upload-applet/details.html</a></li>
<li>JUpload, mentioned by ScArcher2</li>
</ul>
| <p>GIMP (<a href="http://www.gimp.org/" rel="nofollow noreferrer">http://www.gimp.org/</a>) is a good tool for resizing images, and it is open source.</p>
| 9,996 |
<p>I have an array in Perl:</p>
<pre><code>my @my_array = ("one","two","three","two","three");
</code></pre>
<p>How do I remove the duplicates from the array?</p>
| <p>You can do something like this as demonstrated in <a href="http://perldoc.perl.org/perlfaq4.html#How-can-I-remove-duplicate-elements-from-a-list-or-array%3f" rel="noreferrer">perlfaq4</a>:</p>
<pre><code>sub uniq {
my %seen;
grep !$seen{$_}++, @_;
}
my @array = qw(one two three two three);
my @filtered = uniq(@array);
print "@filtered\n";
</code></pre>
<p>Outputs:</p>
<pre><code>one two three
</code></pre>
<p>If you want to use a module, try the <code>uniq</code> function from <a href="https://metacpan.org/pod/List::MoreUtils" rel="noreferrer"><code>List::MoreUtils</code></a></p>
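<p>For example (assuming the module is installed from CPAN):</p>
<pre><code>use List::MoreUtils qw(uniq);

my @array = qw(one two three two three);
my @filtered = uniq(@array);   # keeps the first occurrence of each element
print "@filtered\n";           # prints: one two three
</code></pre>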
| <p>Try this; it seems the uniq function needs a sorted list to work properly.</p>
<pre><code>use strict;
# Helper function to remove duplicates in a list.
sub uniq {
my %seen;
grep !$seen{$_}++, @_;
}
my @teststrings = ("one", "two", "three", "one");
my @filtered = uniq @teststrings;
print "uniq: @filtered\n";
my @sorted = sort @teststrings;
print "sort: @sorted\n";
my @sortedfiltered = uniq sort @teststrings;
print "uniq sort : @sortedfiltered\n";
</code></pre>
| 2,920 |
<p>Our site has the default Stack Exchange logo (a text balloon with the text 3D); is it possible to change this logo?</p>
<p>I was thinking of something like this:</p>
<p><a href="https://i.stack.imgur.com/n4VXE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n4VXE.png" alt="enter image description here" /></a></p>
<p>or</p>
<p><a href="https://i.stack.imgur.com/w05Jd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w05Jd.png" alt="enter image description here" /></a></p>
<p>If it is possible, we could hold a competition and vote! E.g. instead of the lines, the printed image of 3D.</p>
| <p>Yes, we can! Or at least we can discuss whether we would like a different logo.</p>
<p>From <a href="https://meta.stackexchange.com/a/298341/">this answer</a> on Meta Stack Exchange question "<a href="https://meta.stackexchange.com/questions/298338/whats-the-process-to-change-a-site-logo">What's the process to change a site logo?</a>" we can read:</p>
<blockquote>
<p>If you have an issue with a logo on a site, the best place to start is to open a discussion on that site's meta. Tag it <a href="https://3dprinting.stackexchange.com/questions/tagged/discussion" class="post-tag" title="show questions tagged 'discussion'" rel="tag">discussion</a> and <a href="https://3dprinting.stackexchange.com/questions/tagged/design" class="post-tag" title="show questions tagged 'design'" rel="tag">design</a>, and see what the overall community feeling is.</p>
</blockquote>
<p>Another part reads:</p>
<blockquote>
<p><a href="https://meta.stackexchange.com/questions/99338/who-are-the-community-team-and-what-do-they-do">Community managers</a> monitor per-site metas, so if/when the discussion concludes and the site's community largely supports a change, they can bring your concerns to the design team.</p>
</blockquote>
<p>There is no specific mention of whether beta-state sites can modify the logo; beta sites share the same layout set out by the Stack Exchange designers:</p>
<blockquote>
<p>Our designers come up with the overall site design (including the logo), with some input from the communities. (With the exception of sites that are still in beta or have only recently graduated - those all share the same design.)</p>
</blockquote>
<p>However, we can start a discussion on whether we would like a different logo/favicon. Feel free to add your thoughts as an answer to the question.</p>
| <p>I'd like to bring up my old suggestion again :)</p>
<p><a href="https://i.stack.imgur.com/oVISY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oVISY.png" alt="enter image description here" /></a></p>
<p>Here is the original post:
<a href="https://3dprinting.meta.stackexchange.com/a/248/1211">https://3dprinting.meta.stackexchange.com/a/248/1211</a></p>
| 80 |
<p>Suppose I have the following C code.</p>
<pre><code>unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
</code></pre>
<p>What implicit conversions are going on here, and is this code safe for all values of <code>u</code> and <code>i</code>? (Safe, in the sense that even though <em>result</em> in this example will overflow to some huge positive number, I could cast it back to an <em>int</em> and get the real result.)</p>
| <p><strong>Short Answer</strong></p>
<p>Your <code>i</code> will be <em>converted</em> to an unsigned integer by adding <code>UINT_MAX + 1</code>, then the addition will be carried out with the unsigned values, resulting in a large <code>result</code> (depending on the values of <code>u</code> and <code>i</code>).</p>
<p><strong>Long Answer</strong></p>
<p>According to the C99 Standard:</p>
<blockquote>
<p>6.3.1.8 Usual arithmetic conversions</p>
<ol>
<li>If both operands have the same type, then no further conversion is needed.</li>
<li>Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.</li>
<li>Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.</li>
<li>Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.</li>
<li>Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.</li>
</ol>
</blockquote>
<p>In your case, we have one unsigned int (<code>u</code>) and signed int (<code>i</code>). Referring to (3) above, since both operands have the same rank, your <code>i</code> will need to be <em>converted</em> to an unsigned integer.</p>
<blockquote>
<p>6.3.1.3 Signed and unsigned integers</p>
<ol>
<li>When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.</li>
<li>Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.</li>
<li>Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.</li>
</ol>
</blockquote>
<p>Now we need to refer to (2) above. Your <code>i</code> will be converted to an unsigned value by adding <code>UINT_MAX + 1</code>. So the result will depend on how <code>UINT_MAX</code> is defined on your implementation. It will be large, but it will not overflow, because:</p>
<blockquote>
<p>6.2.5 (9)</p>
<p>A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.</p>
</blockquote>
<p><strong>Bonus: Arithmetic Conversion Semi-WTF</strong></p>
<pre><code>#include <stdio.h>
int main(void)
{
unsigned int plus_one = 1;
int minus_one = -1;
if(plus_one < minus_one)
printf("1 < -1");
else
printf("boring");
return 0;
}
</code></pre>
<p>You can use this link to try this online: <a href="http://codepad.org/yPhYCMFO" rel="noreferrer">https://repl.it/repls/QuickWhimsicalBytes</a></p>
<p><strong>Bonus: Arithmetic Conversion Side Effect</strong></p>
<p>Arithmetic conversion rules can be used to get the value of <code>UINT_MAX</code> by initializing an unsigned value to <code>-1</code>, ie:</p>
<pre><code>unsigned int umax = -1; // umax set to UINT_MAX
</code></pre>
<p>This is guaranteed to be portable regardless of the signed number representation of the system because of the conversion rules described above. See this SO question for more information: <a href="https://stackoverflow.com/questions/809227/is-it-safe-to-use-1-to-set-all-bits-to-true">Is it safe to use -1 to set all bits to true?</a></p>
| <p><strong>Horrible Answers Galore</strong></p>
<p><em>Ozgur Ozcitak</em></p>
<blockquote>
<p>When you cast from signed to unsigned
(and vice versa) the internal
representation of the number does not
change. What changes is how the
compiler interprets the sign bit.</p>
</blockquote>
<p>This is completely wrong.</p>
<p><em>Mats Fredriksson</em></p>
<blockquote>
<p>When one unsigned and one signed
variable are added (or any binary
operation) both are implicitly
converted to unsigned, which would in
this case result in a huge result.</p>
</blockquote>
<p>This is also wrong. Unsigned ints may be promoted to ints should they have equal precision due to padding bits in the unsigned type.</p>
<p><em>smh</em></p>
<blockquote>
<p>Your addition operation causes the int
to be converted to an unsigned int.</p>
</blockquote>
<p>Wrong. Maybe it does and maybe it doesn't.</p>
<blockquote>
<p>Conversion from unsigned int to signed
int is implementation dependent. (But
it probably works the way you expect
on most platforms these days.)</p>
</blockquote>
<p>Wrong. It is either undefined behavior if it causes overflow or the value is preserved.</p>
<p><em>Anonymous</em></p>
<blockquote>
<p>The value of i is converted to
unsigned int ...</p>
</blockquote>
<p>Wrong. Depends on the precision of an int relative to an unsigned int.</p>
<p><em>Taylor Price</em></p>
<blockquote>
<p>As was previously answered, you can
cast back and forth between signed and
unsigned without a problem.</p>
</blockquote>
<p>Wrong. Trying to store a value outside the range of a signed integer results in undefined behavior.</p>
<p><strong>Now I can finally answer the question.</strong></p>
<p>Should the precision of int be equal to that of unsigned int, u will be promoted to a signed int and you will get the value -4444 from the expression (u+i). Now, should u and i have other values, you may get overflow and undefined behavior, but with those exact numbers you will get -4444 <strong>[1]</strong>. This value will have type int. But you are trying to store that value into an unsigned int, so it will then be converted to an unsigned int, and the resulting value will be (UINT_MAX+1) - 4444.</p>
<p>Should the precision of unsigned int be greater than that of an int, the signed int will be promoted to an unsigned int, yielding the value (UINT_MAX+1) - 5678, which will be added to the other unsigned int, 1234. Should u and i have other values which make the expression fall outside the range {0..UINT_MAX}, the value (UINT_MAX+1) will either be added or subtracted until the result DOES fall inside the range {0..UINT_MAX}, and no undefined behavior will occur.</p>
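<p>To put some numbers on this, here is a quick check you can compile yourself. The exact output assumes a typical system where int and unsigned int are both 32 bits wide; note that the cast back to int is implementation-defined, although it gives -4444 on common two's-complement platforms:</p>
<pre><code>#include <stdio.h>

int main(void)
{
    unsigned int u = 1234;
    int i = -5678;
    unsigned int result = u + i;   /* the usual arithmetic conversions kick in here */

    printf("%u\n", result);        /* typically 4294962852, i.e. (UINT_MAX + 1) - 4444 */
    printf("%d\n", (int)result);   /* typically -4444 */
    return 0;
}
</code></pre>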
<p><strong>What is precision?</strong></p>
<p>Integers have padding bits, sign bits, and value bits. Unsigned integers do not have a sign bit obviously. Unsigned char is further guaranteed to not have padding bits. The number of values bits an integer has is how much precision it has.</p>
<p>[Gotchas]</p>
<p>The sizeof operator alone cannot be used to determine the precision of an integer if padding bits are present. And the size of a byte does not have to be an octet (eight bits) as defined by C99.</p>
<p><strong>[1]</strong> The overflow may occur at one of two points. Either before the addition (during promotion) - when you have an unsigned int which is too large to fit inside an int. The overflow may also occur after the addition even if the unsigned int was within the range of an int, after the addition the result may still overflow.</p>
| 7,342 |
<p>Anyone have any good urls for templates or diagram examples in Visio 2007 to be used in software architecture?</p>
| <p><a href="http://softwarestencils.com/uml/index.html" rel="noreferrer">Here is a link</a> to a Visio Stencil and Template for UML 2.0.</p>
| <p>There should be templates already included in Visio 2007 for software architecture but you might want to check out <a href="http://office.microsoft.com/en-us/templates/CT102115841033.aspx" rel="nofollow noreferrer">Visio 2007 templates</a>.</p>
| 4,055 |
<p>I am having trouble printing a hollow object using Slic3r. On flat slopes on top, there are gaps that I cannot get fixed. The perimeters of successive layers just don't cover each other.</p>
<p>Cura however adds filament to cover the gaps.</p>
<p>The bottom left bunny is sliced with Slic3r 1.2.9.99.
The top right bunny is sliced with Cura 2.5. Take a closer look at the forehead and the back of the bottom left bunny.</p>
<p>I have "extra perimeters if needed" turned on. But turning it off makes no difference. What am I missing?</p>
<p>So far, only adding infill and increasing the solid top layer count helps to get a closed surface. But then everything gets stiffer. The bunnies are printed with nylon so they are a bit squishy.</p>
<p><a href="https://i.stack.imgur.com/kTkjA.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/kTkjA.jpg" alt="Cura and Slic3r Bunnies"></a></p>
<p>The printer settings in both Cura and Slic3r are:</p>
<ul>
<li>0.4 mm nozzle;</li>
<li>0.2 mm layer height;</li>
<li>No infill;</li>
<li>2 perimeter walls, and;</li>
<li>3 solid top/bottom layers.</li>
</ul>
| <p>This seems to be a recurring problem with Slic3r.</p>
<p>Slic3r appears to have problems with perimeters that are not attached to infill. I suspect that it is getting confused about what is the inside and what is the outside. I know that seems a bit silly, but as you slice an object with indentations (like the bunny's face), the perimeter can cease to be a simple closed shape and the slicer gets confused. If you use a viewer to step through the gcode layers around the place it starts having problems, you may be able to see what is going wrong.</p>
<p>Here are a couple of examples of why I say this is a recurring problem with slic3r. I also recall seeing a video that showed the problem but I can't remember where. That was one of the reasons I don't use slic3r.</p>
<ul>
<li><a href="https://github.com/alexrj/Slic3r/issues/748" rel="nofollow noreferrer">Reported on Sic3r Git in 2012</a> - Can't tell for sure if this was ever fixed</li>
<li><a href="http://forums.reprap.org/read.php?340,512947" rel="nofollow noreferrer">Infill Perimeter issue in 2015</a></li>
</ul>
<p>Here are three options that may work</p>
<ol>
<li>Use a different slicer for this specific condition. Every product is going to have vulnerabilities - this may be one of slic3r's.</li>
<li>Increase the perimeter count and the top and bottom layer thicknesses. Making them thick enough will let them bridge the problem areas. Use a gcode viewer to inspect that area to see if it fixed the problem; that way you don't waste material on another failed print. It sounds like you may have already tried this but didn't like that it made the model stiffer.</li>
<li>Repair the STL file using an application like <a href="http://www.meshmixer.com/" rel="nofollow noreferrer">Meshmixer</a>. Maybe you will have to get the file close then tweak it where it doesn't. Here is good article from <a href="https://pinshape.com/" rel="nofollow noreferrer">PinShape</a> <a href="https://pinshape.com/blog/how-to-repair-your-stl-files/" rel="nofollow noreferrer">repairing and STL file</a>.</li>
</ol>
<p>Good luck, hope this helps.</p>
| <p>What infill are you using? How flexible do you need the object to be?</p>
<p>I suggest two or three top and bottom layers, and a second or third solid perimeter as well.</p>
<p>The slicer has to determine if a particular path is part of an external perimeter or a top layer, so adding one to each should give a better result.</p>
<p>Thinner layers and a fan cooling the deposited material both help with overhangs like this.</p>
| 608 |
<p>I got a webserver with a running application. There's a webpage with a form: some text data and a file upload field. Now, what I would like to have is it working like this:<br>
The file is sent to the dedicated server, different from the one the application is running on. The server should return some kind of path (or anything that identifies the uploaded and saved file and allows creating a URL). Then, both this path and the user-filled data should be submitted to the webserver with the application, for any kind of database storage.</p>
<p>Problem is, there are 2 different servers, so I can't upload the file with JavaScript, can I? Another way would be just to use an <code>iframe</code> and put the upload form in there - but then I think I can't access the result of the upload (still inside the iframe) with JavaScript to pass the file path to my main server.</p>
<p>I could also just upload the file to the same server my application is running on and then just <code>rsync</code> it to the other one - but I'd like to avoid that if I can, trying to minimize the traffic :)</p>
<p>How do you handle such thing in your applications? </p>
| <p>POST to dedicated server, server stores image and calls back to web server through a web service or other to give it any info required.</p>
| <p>POST to dedicated server, server stores image and calls back to web server through a web service or other to give it any info required.</p>
| 8,870 |
<p>I want to make a copy of an ActiveRecord object, changing a single field in the process (in addition to the <strong>id</strong>). What is the simplest way to accomplish this?</p>
<p>I realize I could create a new record, and then iterate over each of the fields copying the data field-by-field - but I figured there must be an easier way to do this.</p>
<p>Perhaps something like this:</p>
<pre class="lang-ruby prettyprint-override"><code> new_record = Record.copy(:id)
</code></pre>
| <p>To get a copy, use the <a href="https://api.rubyonrails.org/classes/ActiveRecord/Core.html#method-i-dup" rel="noreferrer">dup</a> (or clone for < rails 3.1+) method:</p>
<pre><code>#rails >= 3.1
new_record = old_record.dup
# rails < 3.1
new_record = old_record.clone
</code></pre>
<p>Then you can change whichever fields you want.</p>
<p><a href="http://api.rubyonrails.com/classes/ActiveRecord/Base.html#M001363" rel="noreferrer">ActiveRecord overrides the built-in Object#clone</a> to give you a new (not saved to the DB) record with an unassigned ID.<br />
Note that it does not copy associations, so you'll have to do this manually if you need to.</p>
<p><a href="https://gist.github.com/994614" rel="noreferrer">Rails 3.1 clone is a shallow copy, use dup instead...</a></p>
| <p>You can also check the <a href="https://github.com/esbanarango/acts_as_inheritable" rel="nofollow">acts_as_inheritable</a> gem.</p>
<p>"Acts As Inheritable is a Ruby Gem specifically written for Rails/ActiveRecord models. It is meant to be used with the <a href="https://github.com/esbanarango/acts_as_inheritable#self-referential-association" rel="nofollow">Self-Referential Association</a>, or with a model having a parent that share the inheritable attributes. This will let you inherit any attribute or relation from the parent model."</p>
<p>By adding <code>acts_as_inheritable</code> to your models you will have access to these methods:</p>
<p><strong>inherit_attributes</strong></p>
<pre class="lang-ruby prettyprint-override"><code>class Person < ActiveRecord::Base
acts_as_inheritable attributes: %w(favorite_color last_name soccer_team)
# Associations
belongs_to :parent, class_name: 'Person'
has_many :children, class_name: 'Person', foreign_key: :parent_id
end
parent = Person.create(last_name: 'Arango', soccer_team: 'Verdolaga', favorite_color:'Green')
son = Person.create(parent: parent)
son.inherit_attributes
son.last_name # => Arango
son.soccer_team # => Verdolaga
son.favorite_color # => Green
</code></pre>
<p><strong>inherit_relations</strong></p>
<pre class="lang-ruby prettyprint-override"><code>class Person < ActiveRecord::Base
acts_as_inheritable associations: %w(pet)
# Associations
has_one :pet
end
parent = Person.create(last_name: 'Arango')
parent_pet = Pet.create(person: parent, name: 'Mango', breed:'Golden Retriver')
parent_pet.inspect #=> #<Pet id: 1, person_id: 1, name: "Mango", breed: "Golden Retriver">
son = Person.create(parent: parent)
son.inherit_relations
son.pet.inspect # => #<Pet id: 2, person_id: 2, name: "Mango", breed: "Golden Retriver">
</code></pre>
<p>Hope this can help you.</p>
| 8,482 |
<p>How do I <code>grep</code> and show the preceding and following 5 lines surrounding each matched line?</p>
| <p>For <strong>BSD</strong> or <strong>GNU</strong> <code>grep</code> you can use <code>-B num</code> to set how many lines before the match and <code>-A num</code> for the number of lines after the match.</p>
<pre><code>grep -B 3 -A 2 foo README.txt
</code></pre>
<p>If you want the same number of lines before and after you can use <code>-C num</code>.</p>
<pre><code>grep -C 3 foo README.txt
</code></pre>
<p>This will show 3 lines before and 3 lines after.</p>
| <pre><code>$ grep thestring thefile -5
</code></pre>
<p><code>-5</code> gets you <code>5</code> lines above and below the match for 'thestring'; it is equivalent to <code>-C 5</code> or <code>-A 5 -B 5</code>.</p>
| 3,055 |